Easy HPC in the Big Easy: An SC10 Interview with Bill Hilf

By Nicole Hemsoth

November 17, 2010

During SC10 in New Orleans this week, our editor spent an hour with Bill Hilf to discuss a wide range of topics, including Microsoft’s Azure cloud offering, both in terms of some recent newsworthy enhancements and the announcement that a certain other major public cloud now boasts GPU capabilities. This led to discussions about performance, job scheduling requirements for hosting compute-intensive and HPC applications in a cloud environment, and more general topics related to the company’s strategy as the “other” public cloud continues to evolve, albeit via a different course. We’ll be bringing more details from this chat as the week goes on…

Microsoft’s Technical Computing Group, which focuses on HPC, parallel and cloud computing, has been evolving as of late, due in large part to input from its General Manager, Bill Hilf, and his belief that the only way to broaden HPC access is to make working with high-performance computing applications and resources as easy as filling in cells in an Excel spreadsheet.

Ultimate abstraction of complexity might strike some of you as unrealistic. The thought that your applications can somehow be negotiated and abstracted to such a high level that they require little more than data entry does seem far-fetched, but clearly, for Microsoft, the effort to make this a reality is not simply a priority so it can better engage that elusive missing middle of HPC users—it’s the key to the company’s survival in the HPC space.

In Hilf’s view, technical computing users are going to form the backbone of Azure, hence the focus on HPC applications in any number of the company’s cloud-related announcements.

This includes, for example, the news today that BLAST had been ported to their cloud and was being offered “free” (which is fitting, since it’s really free to begin with) to users with Azure accounts. We’ll get to that item in a moment, but for now, back to how Bill Hilf wants to destroy HPC…or at least the weight of that acronym…in other words, by making it synonymous with computing in general.

“It goes far beyond building operating systems; it’s about building end user tools; it’s about making it all seamless like we did recently with BLAST. We ported it to Azure, which was good, but there was still a lot of this that was really difficult. Like, how do you go and distribute all of this across Azure? And what is Azure then exactly? And then how do you track progress when it’s thousands and thousands of cores and any of this could be anywhere since it’s a global OS. Really, your job could be running anywhere; in Shanghai or elsewhere—so how do you track it or get one answer back across thousands of machines?”
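Stripped of the cloud specifics, what Hilf is describing is a classic scatter/gather problem: split the work, run the pieces anywhere, track completion, and reduce the partial results to one answer. As a rough illustration of the pattern (a hypothetical sketch using Python's standard library on a local process pool, not Azure's actual tooling), it looks something like this:

```python
# Hypothetical scatter/gather sketch of the pattern Hilf describes:
# fan a big job out, track progress, reduce to a single answer.
# This uses a local process pool, not Azure's actual APIs.
from concurrent.futures import ProcessPoolExecutor, as_completed

def search_chunk(chunk):
    """Stand-in for one worker's share of the job (e.g., one slice
    of a sequence database)."""
    return sum(chunk)  # placeholder partial result

def run_distributed(data, n_workers=8):
    # Scatter: split the input into one chunk per worker.
    chunks = [data[i::n_workers] for i in range(n_workers)]
    partials = []
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        futures = [pool.submit(search_chunk, c) for c in chunks]
        # Track progress as results trickle in from wherever they ran.
        for done, future in enumerate(as_completed(futures), start=1):
            partials.append(future.result())
            print(f"progress: {done}/{n_workers} chunks complete")
    # Gather: one answer back across all workers.
    return sum(partials)

if __name__ == "__main__":
    print(run_distributed(list(range(1_000_000))))
```

The hard part Hilf alludes to is doing this when the "pool" is thousands of machines scattered across a global OS rather than one box.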

Easing into Old Models

As Bill Hilf noted, a couple of years ago it became clear that Microsoft’s effort to become a major player in the HPC server space was not working as envisioned, so a shift in ideology was necessary—a shift that actually brought Microsoft right back to where it got its start in the first place so long ago: removing complexity, taking vastly complicated programming and hiding it under a seamless veneer of usability.

That veneer has been so seamless that we can all too often forget completely what lies behind that Excel spreadsheet or, for that matter, the Word document that the first draft of this article was created in. Here’s the idea, though, and it does go beyond removing complexity and adding an intuitive UI: by taking such steps to deliver complex applications to the masses via these smooth user interfaces and focusing on ease of use above all, what we consider to be powerful applications (the “we” is loose and general here) are no longer necessarily perceived as powerful because they’ve become ubiquitous.

So more specifically, Hilf is saying, “we want to eventually make HPC, that acronym, meaningless” in the sense that users, even highly technical users, will no longer consider their applications in the context of high-performance or general purpose—or anything. It will all simply become computation. Plain and simple.

This can be a difficult idea to wrap the brain around, especially during a conference dedicated to that acronym, but in some ways the predominance of complexity—in fact, the celebration of it here in New Orleans this week—is exactly what Microsoft wants to be rid of. The company wants to open doors of access using that same tried and true model of delivering mainstream products, even high-end ones, to everyone with enough computer savvy to click a few buttons. And you know, while some of it seems far off, there is something to be said for the old Microsoft simplification trick.

To give this some added context, our conversation actually began with a mild question about what he thought of their biggest public cloud competitor, Amazon Web Services, delivering its new Cluster GPU instance type—it didn’t start with the conversation about ease as central to Microsoft’s refreshed Technical Computing ambitions and strategy, but all of the above was necessary prefacing.

While I was leading up to a “yes, but when will you have a similar offering,” Hilf took another route and suggested that while the Amazon GPU announcement was “technically and academically interesting, on a theory level that is,” it’s not much more than that, since it essentially serves only the relative few with the programming incentive and skill set to use it. And this brings back his point yet again—what good is all the new cloud-delivered access to seemingly endless infrastructure if only some are able to use it?

This point is well taken. Many HPC users have deep knowledge of one language, but researchers and end users want to focus on their research or development mission and minimize the time spent becoming system administrators, if at all possible. With something like GPU computing capabilities being introduced in the public cloud, even if some of these potential users knew very well that they could achieve significant performance increases via GPU acceleration, there is no layer of abstraction present to disguise the ugly CUDA barracuda behind it.

More generally speaking, Bill Hilf stated the following about GPUs in the cloud (or otherwise for that matter) and related this back to Microsoft’s “big picture” about how to make serious inroads back into HPC via the old “mainstreaming it” trick…

“If you look at the Top500, one of the most startling things is that most of the systems in the top ten are using GPUs; that general idea of huge parallelism through 500 cores on one GPU versus four cores on a CPU—well, people are really starting to understand it and how to exploit it. So for this HPC group, they’re all asking, ‘how do we take advantage of the hardware and also, how do we make it easy?’

Having GPUs in a cloud is technically interesting but it doesn’t break any barriers because it’s still complex. Just offering them doesn’t make it more accessible; you still have to write a low-level CUDA program in a very specific hardware-oriented language for one specific GPU from one vendor. It’s all really technically complicated and therefore it’s still just a niche thing—it’s not like Visual Basic or Word, for instance, where that complexity is abstracted—all of this is just technically interesting, but it’s not easy, and easy is the missing ingredient as we see it.”

“Mainstreaming” HPC Applications

Although the conversation didn’t hinge on GPUs specifically, that was a great frame for the theme of the discussion, which hinged on ease of use. Hilf held up the port of BLAST to Azure as an example of this pairing of “mainstreaming” HPC applications with greater ease of use, one in a coming series of announcements related to easy HPC.

What we are going to see from Microsoft in the next year is represented in its announcement about the BLAST case studies. Hilf says this is the first of many coming examples meant to show that the cloud can deliver what was otherwise thought to be impossible. The company worked with a major hospital that wanted to take advantage of BLAST by running what might be one of the most comprehensive BLAST-based searches to date: a search against the entire protein database—some 10 million sequences, which works out to more than one hundred billion comparisons. This is a rather staggering project in terms of scope if you’re reliant on NCBI and its strained resources, for example. Actually, it’s a staggering project no matter what you’re using.

Azure handled this request, however, and Hilf claims that without any kind of special pricing, the cost was about $18,000 for a huge run that would otherwise have required millions in hardware and staffing investment. Oh, and with setup time included (one day), they ran the whole job in six days while keeping 4,000 cores busy around the clock.
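Those figures invite a quick back-of-the-envelope check. The core-hour total and per-core-hour rate below are derived from the numbers Hilf quoted, not figures he stated himself:

```python
# Sanity check on the quoted BLAST-on-Azure figures.
cores = 4000        # cores kept busy around the clock (as quoted)
run_days = 6        # elapsed run time, excluding the day of setup
cost_usd = 18_000   # total bill (as quoted)

core_hours = cores * run_days * 24
print(core_hours)                       # 576,000 core-hours
print(round(cost_usd / core_hours, 3))  # ~0.031, about 3 cents per core-hour
```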

Hilf wants these case studies to show how Microsoft is recommitting to HPC, and thus carving out a slice of the market for itself that might have seemed a little farther off not even a year ago.

Unleashing the Schedulers

Aside from a greater emphasis on ease of use and abstraction of complexity, we talked for quite some time about the role of providing automation and policies for governing how the cloud is used and what parameters users can work by. This is one area where Azure could have a leg up on Amazon.

One key to Microsoft’s success with HPC applications in the cloud (and there is no debate that it’s the embarrassingly parallel stuff we’re talking about here, for the most part) hinges on its ability to offer some degree of automation to allow resources to scale for bursty needs.

The odd thing about this job scheduler for Azure is that it’s push-button, not fully automated to scale according to projected workloads or sudden spikes in need. Hilf seemed to be suggesting that while greater automation would eventually be a priority, for now, during this proof-of-concept phase for a lot of technical computing users in their cloud, IT managers want full control over how the cloud experiment goes.
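To make the distinction concrete, a fully automated scheduler would grow and shrink the pool itself based on queue depth or load, with a hard cap so costs stay predictable. The sketch below is a hypothetical threshold-based policy for illustration only; the function name and parameters are invented, not part of Azure's scheduler:

```python
# Hypothetical threshold-based autoscaling policy, illustrating the
# "fully automated" behavior the Azure scheduler did not yet offer.
# All names and thresholds here are invented for illustration.
def target_instances(queued_jobs, jobs_per_instance=50,
                     min_instances=2, max_instances=500):
    """Size the pool to the queue depth, with a hard ceiling so a
    runaway workload can't produce a runaway bill."""
    want = -(-queued_jobs // jobs_per_instance)  # ceiling division
    return max(min_instances, min(want, max_instances))

assert target_instances(0) == 2        # idle: keep a small floor
assert target_instances(1_000) == 20   # bursty demand: scale out
assert target_instances(10**6) == 500  # capped: the bill stays bounded
```

The cap is the point: it is exactly the guardrail against the runaway-bill scenario Hilf describes next.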

For instance, he used a traumatic tale from his personal life, noting that while in Asia recently, he used a number of features on his phone without realizing how the charges were mounting and arrived back in the States with a $700 bill. He sees how easily this can happen and knows that if a cloud experiment gets a little out of control and no one quite sees the full extent of how resources are being allocated and used, this could mean the death of the pending cloud test phase for that user—and probably the death of employment for the system admin who let it slip under his radar as well.

The topic of job schedulers in the cloud isn’t a sexy one, but it is increasingly critical for users and for Microsoft, which again wants to add as many simplification features as possible, including the ability to see and manage resources.

We might dig into the interview segment about job schedulers in a more focused post as the din of SC dies down and we’ve had time to talk to one of the stars of the HPC scheduling show tomorrow, Platform Computing.

More to come from this lengthy interview later this week…
 
