Easy HPC in the Big Easy: An SC10 Interview with Bill Hilf

By Nicole Hemsoth

November 17, 2010

During SC10 in New Orleans this week, our editor spent an hour with Bill Hilf to discuss a wide range of topics, including Microsoft’s Azure cloud offering, both in terms of some recent newsworthy enhancements and the announcement that a certain other major public cloud now boasts GPU capabilities. This led to discussions about performance, about job scheduling requirements for hosting compute-intensive and HPC applications in a cloud environment, and about more general topics related to the company’s strategy as the “other” public cloud continues to evolve, albeit via a different course. We’ll be bringing more details from this chat as the week goes on…

Microsoft’s Technical Computing Group, which focuses on HPC, parallel and cloud computing, has been evolving of late, due in large part to input from its General Manager, Bill Hilf, and his belief that the only way to broaden HPC access is to make high-performance computing applications and resources as easy to use as filling in cells in an Excel spreadsheet.

Ultimate abstraction of complexity might strike some of you as unrealistic. The thought that your applications can somehow be negotiated and abstracted to such a high level that they require little more than data entry does seem far-fetched, but clearly, for Microsoft, the effort to make this a reality is not simply a priority so it can better engage that elusive missing middle of HPC users; it’s the key to the company’s survival in the HPC space.

In Hilf’s view, technical computing users are going to form the backbone of Azure, hence the focus on HPC applications in any number of the company’s cloud-related announcements.

This includes, for example, today’s news that BLAST had been ported to their cloud and was being offered “free” (which is good, since it’s really free to begin with) to users with Azure accounts. We’ll get to that item in a moment, but for now, back to how Bill Hilf wants to destroy HPC… or at least the weight of that acronym… in other words, by making it synonymous with computing in general.

“It goes far beyond building operating systems; it’s about building end user tools; it’s about making it all seamless, like we did recently with BLAST. We ported it to Azure, which was good, but there was still a lot of this that was really difficult. Like, how do you go and distribute all of this across Azure? And what is Azure then, exactly? And then how do you track progress when it’s thousands and thousands of cores and any of this could be anywhere, since it’s a global OS? Really, your job could be running anywhere, in Shanghai or elsewhere. So how do you track it, or get one answer back across thousands of machines?”
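To make that scatter-and-gather problem concrete, here is a minimal sketch in Python of the pattern Hilf describes: fan the work out across many workers, watch progress trickle in from wherever it runs, and fold the partial results back into one answer. The run_blast_chunk worker is purely hypothetical, a stand-in for a real search over one slice of a database, and is in no way Microsoft’s implementation.

```python
# Minimal scatter/gather sketch: fan a big job out, track progress,
# and fold partial results into one answer. Purely illustrative.
from concurrent.futures import ProcessPoolExecutor, as_completed

def run_blast_chunk(chunk_id: int) -> int:
    """Hypothetical worker: search one slice of the database and
    return the number of hits found in that slice."""
    return chunk_id % 3  # placeholder result

def scatter_gather(num_chunks: int = 100) -> int:
    total_hits, done = 0, 0
    with ProcessPoolExecutor() as pool:
        futures = [pool.submit(run_blast_chunk, i) for i in range(num_chunks)]
        for fut in as_completed(futures):  # results arrive out of order
            total_hits += fut.result()     # aggregate the partial answers
            done += 1
            print(f"progress: {done}/{num_chunks} chunks complete")
    return total_hits

if __name__ == "__main__":
    print("total hits:", scatter_gather())
```

The hard part at Azure’s scale is everything this sketch takes for granted: the workers are real machines that fail, the chunks number in the thousands, and “the pool” spans data centers on different continents.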

Easing into Old Models

As Bill Hilf noted, a couple of years ago it became clear that Microsoft’s efforts to become a major player in the HPC server space were not working as envisioned, so a shift in ideology was necessary. That shift actually brought Microsoft right back to where it got its start so long ago: removing complexity, taking vastly complicated programming and hiding it under a seamless veneer of usability.

That veneer has been so seamless that we can all too often completely forget what lies behind that Excel spreadsheet or, for that matter, the Word document in which the first draft of this article was created. Here’s the idea, though, and it does go beyond removing complexity and adding an intuitive UI: by delivering complex applications to the masses via smooth user interfaces and focusing on ease of use above all, what we consider to be powerful applications (the “we” is loose and general here) are no longer necessarily perceived as powerful, because they’ve become ubiquitous.

So, more specifically, Hilf is saying, “we want to eventually make HPC, that acronym, meaningless” in the sense that users, even highly technical users, will no longer consider their applications in the context of high performance or general purpose, or anything else. It will all simply become computation. Plain and simple.

This can be a difficult idea to wrap the brain around, especially during a conference dedicated to that acronym, but in some ways the predominance of complexity, in fact the celebration of it here in New Orleans this week, is exactly what Microsoft wants to be rid of. The company wants to open doors of access using that same tried-and-true model of delivering mainstream products, even high-end ones, to everyone with enough computer savvy to click a few buttons. And while some of it seems far off, there is something to be said for the old Microsoft simplification trick.

To give this some added context, our conversation actually began with a mild question about what he thought of their biggest public cloud competitor, Amazon Web Services, delivering its new Cluster GPU instance type. It didn’t start with the conversation about ease as central to Microsoft’s refreshed Technical Computing ambitions and strategy, but all of the above was necessary prefacing.

While I was leading up to a “yes, but when will you have a similar offering?” at first, Hilf took another route and suggested that while the Amazon GPU announcement was “technically and academically interesting, on a theory level that is,” it’s not much more than that, since it essentially serves only the relative few with the programming incentive and skill set. And this brings back his point yet again: what good is all the new cloud-delivered access to seemingly endless infrastructure if only some are able to use it?

This point is well taken. Most HPC users have depth of knowledge in one language, but researchers and users want to focus on their research or development mission and minimize, if at all possible, the time they must spend playing system admin. With something like GPU computing capabilities being introduced in the public cloud, even if some of these potential users knew very well that they could achieve significant performance increases via GPU acceleration, there is no layer of abstraction present to disguise the ugly CUDA barracuda behind it.

More generally speaking, Bill Hilf stated the following about GPUs in the cloud (or otherwise, for that matter) and related it back to Microsoft’s “big picture” of how to make serious inroads back into HPC via the old “mainstreaming it” trick…

“If you look at the Top500, one of the most startling things is that most of the systems in the top ten are using GPUs. That general idea of huge parallelism through 500 cores on one GPU versus four cores on a CPU? Well, people are really starting to understand it and how to exploit it. So for this HPC group, they’re all asking, ‘how do we take advantage of the hardware and also, how do we make it easy?’

“Having GPUs in a cloud is technically interesting, but it doesn’t break any barriers because it’s still complex. Just offering them doesn’t make it more accessible; you still have to write a low-level CUDA program in a very specific hardware-oriented language for one specific GPU from one vendor. It’s all really technically complicated, and therefore it’s still just a niche thing. It’s not like Visual Basic or Word, for instance, where that complexity is abstracted. All of this is just technically interesting, but it’s not easy, and easy is the missing ingredient as we see it.”
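To illustrate the abstraction gap Hilf is pointing at (our example, not his), compare what a user writes with an abstracted array library against the steps a hand-written, vendor-specific GPU version of the same operation would force on the programmer:

```python
# One line of abstracted array math versus the low-level GPU checklist.
import numpy as np

a = np.random.rand(1_000_000)
b = np.random.rand(1_000_000)

# Abstracted: the library decides how the work is laid out and executed.
c = a + b

# A hand-written CUDA version of this same elementwise add would,
# roughly, require the programmer to:
#   1. allocate device memory for a, b and c
#   2. copy a and b from host to device
#   3. write a kernel in a vendor-specific, hardware-oriented language
#   4. choose a grid/block launch configuration by hand
#   5. copy c back to the host and free the device memory
# That checklist is the complexity Hilf argues keeps GPU clouds "a niche thing."
```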

“Mainstreaming” HPC Applications

Although the conversation didn’t center on GPUs specifically, they were a great frame for the theme of the discussion, which hinged on ease of use. Hilf held up the company’s porting of BLAST to Azure as an example of this pairing of “mainstreaming” HPC applications with greater ease of use, the first in a coming series of announcements related to easy HPC.

What we are going to see from Microsoft in the next year is represented in its announcement about the BLAST case studies. Hilf says this is the first of many examples set to show that the cloud can make possible what was otherwise thought to be impossible. The company worked with a major hospital that wanted to take advantage of BLAST by running what might be one of the most comprehensive BLAST-based searches to date: a search against the entire protein database, which is 10 million sequences, amounting to more than one hundred billion comparisons. This is a rather staggering project in terms of scope if you’re reliant on NCBI and its strained resources, for example. Actually, it’s a staggering project no matter what you’re using.

Azure handled this request, however, and Hilf claims that without any kind of special pricing, the cost was about $18,000 for a huge run that would otherwise have required millions in hardware and staffing investment. Oh, and with setup time included (one day), they ran the whole job in six days while keeping 4,000 cores busy around the clock.
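Those figures invite a quick sanity check. Here is a back-of-envelope calculation, in Python, using only the numbers Hilf quoted; the per-core-hour rate at the end is our derivation, not an official Azure price.

```python
# Back-of-envelope check of the BLAST run figures as quoted.
cores = 4000
run_days = 6
cost_usd = 18_000

core_hours = cores * run_days * 24  # 576,000 core-hours
rate = cost_usd / core_hours        # roughly $0.031 per core-hour
print(f"{core_hours:,} core-hours at about ${rate:.3f} per core-hour")
```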

Hilf wants these case studies to show how Microsoft is recommitting to HPC, and thus carving out a slice of the market for itself that might have seemed a little farther off not even a year ago.

Unleashing the Schedulers

Aside from a greater emphasis on ease of use and abstraction of complexity, we talked for quite some time about the role of automation and of policies governing how the cloud is used and the parameters users can work within. This is one area where Azure could have a leg up on Amazon.

One key to Microsoft’s success with HPC applications in the cloud (and there is no debate that it’s the embarrassingly parallel stuff we’re talking about here, for the most part) hinges on its ability to offer some degree of automation, allowing resources to scale for bursty needs.

The odd thing about the job scheduler for Azure is that it’s push-button, not fully automated to scale according to projected workloads or sudden spikes in need. Hilf seemed to be suggesting that while greater automation will eventually be a priority, for now, during this proof-of-concept phase for a lot of technical computing users in their cloud, IT managers want full control over how the cloud experiment goes.

For instance, he offered a traumatic tale from his personal life, noting that while in Asia recently, he used a number of features on his phone without realizing how the charges were mounting and arrived back in the States with a $700 bill. He sees how easily this can happen and knows that if a cloud experiment gets a little out of control, and no one quite sees the full extent of how resources are being allocated and used, it could mean the death of the pending cloud test phase for that user, and probably the death of employment for the system admin who let it slip under his radar.
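That fear of runaway spend is exactly what a hard resource cap addresses. Below is a minimal sketch, purely our illustration rather than anything from Azure’s scheduler, of the kind of capped, push-button scaling policy an IT manager might insist on during a proof-of-concept phase.

```python
# Capped, push-button scaling: grant extra cores only up to a budgeted limit.
def request_cores(current: int, requested: int, hard_cap: int) -> int:
    """Grant as many additional cores as the cap allows; never exceed it."""
    granted = min(requested, hard_cap - current)
    return max(granted, 0)

# An admin who budgets 4,000 cores can let users push the button freely:
print(request_cores(current=3800, requested=500, hard_cap=4000))  # -> 200
print(request_cores(current=4000, requested=500, hard_cap=4000))  # -> 0
```

The design choice is simply that scale-out is opt-in and bounded, so the experiment can never outrun its budget, which is the control Hilf says IT managers are asking for right now.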

The topic of job schedulers in the cloud isn’t a sexy one but it is increasingly critical for users and for Microsoft, who again wants to add as many simplification features as possible, including the ability to see and manage resources.

We might take up the interview segment about job schedulers in a more focused post as the din of SC dies down and after we’ve had time to talk tomorrow with one of the stars of the HPC scheduling show, Platform Computing.

More to come from this lengthy interview later this week….
 
