This Week in HPC News
From fresh Department of Defense supercomputing investment to a leadership reboot at Microsoft and notable developments at companies like SGI and Mellanox, this has been a whirlwind week, just the kind that keeps things interesting…
With the HPC Advisory Council’s Stanford Conference and Exascale Workshop happening earlier in the week, there were plenty of presentations about the numerous software, memory, efficiency and other issues at the computational pinnacle. For those who were unable to attend the event, we secured an early summary of some of the key themes from a few attendees. Some of the presentations that are mentioned can be found at the HPC Advisory Council’s site, including the one that seemed to garner the most attention—Intel’s Mark Seager offered up a detailed overview of exascale application and architectural challenges. D.K. Panda’s presentation on exascale programming models also drew quite a bit of commentary—it’s worth a look here.
Before we get to the news roundup for the week, it seemed worth noting that a few of our own themes sprang up. Quantum computing had some airplay, starting with an interview with Bo Ewald, president of commercial quantum computing company D-Wave, who joined us for a podcast episode about key user successes and the challenges ahead. Adding to that, we reported on a new development toward integrated quantum systems. While not supercomputing per se quite yet, it's hard to ignore the activity around this subject over the last year. Our podcast last year on the skeptical side drew an eyebrow raise; if you missed it and need some balanced perspective on what's really happening with these systems, it's worth a second opinion from physicist Dr. Helmut Katzgraber.
On the audio front, we spoke with a researcher at LBNL's Advanced Light Source about data demands there; talked with the new director of TACC, Dan Stanzione, about their experiences on their large Xeon Phi supercomputer, Stampede; checked in with market research experts to get a sense of what's next for the HPC market in 2014; and more. Always good to give your eyes a break and listen instead, right?
And so finally, on to the week's news we go. What we lack in quantity we certainly make up for in the quality of the top items.
We broke the story this week that Cray was coming into a large sum from the Department of Defense. We caught wind of it late last week and got confirmation via a slippery government document formally announcing that the company had been awarded two separate contracts at two sites, at just over $21.5 million each. Details are still fuzzy, with no formal word from Cray, but the DoD confirms that the awards have been made.
Cray seems to be everywhere these days. With their YarcData (that is painful to type, it really is) division and a new emphasis on Hadoop for both HPC and “big data” enterprise markets, this news is icing on the cake. For those who follow where they’re headed financially, there’s an upcoming investor call on February 13 at 4:30 Eastern. Details can be found on their site.
Not sure if that’s a bad pun or a clever subtitle, or if it’s just that you don’t want to hear anything else about Microsoft this week. We totally don’t blame you for that, but listen: perhaps there is something to this new leadership, something that will take R&D to the next level.
To be fair, despite our positive spin on what this might mean far down the road once restructuring and rethinking take place, some readers sent notes in HPCwire’s direction this week after that article, pointing out that Nadella was one of the (several) leaders behind the move to fold the technical computing group into the cloud division. While the consensus seems to be that Microsoft might be more inclined to reinvest in some cutting-edge new tools, the real dollars will go toward pushing HPC as a much smaller slice of the “big compute” pie. “Big compute” to them includes HPC, but it’s all wrapped in cloud puff, which is not always a prime fit for the workloads they are targeting with their new high-end computing instance types (similar to AWS HPC instances). Tiffany has a solid writeup of what Big Compute might mean for HPC…would you use it? And if not, would AWS’ instances be any better? Or is it more a matter of “why clouds” versus “which clouds,” if it’s an issue at all? We’re interested in these questions…
In addition to running the HPC Advisory Council event this week, Mellanox managed to spin out some news around its new Mellanox Capital arm.
Mellanox Capital will make investments in start-ups and technology companies that focus on innovative approaches to storage, compute, virtualization, cloud infrastructure, big data, enterprise application platforms, and embedded solutions. “As a key component of Mellanox’s innovation strategy, equity investments will provide us with access to technologies, markets, applications and key innovations influencing the data center as well as the high-performance interconnect market,” said Eyal Waldman, president and CEO of Mellanox Technologies. “Mellanox Capital will enhance our relationships with entrepreneurs, opening doors to new markets, customers, alliances, co-investors, and emerging technologies.”
Additionally, the company announced the availability of its Ethernet Switch SDK API with full Layer 2 and Layer 3 functionality as an open source software platform. The release of the API is a continued step in the Open Ethernet initiative and works in conjunction with the Open Compute Project (OCP). Mellanox says the open SDK API enables the Ethernet community to build open source networking applications and protocols faster and can serve as a base for a standard Ethernet switch interface.
For those who like their big data and HPC served as separate courses, the pickings are getting slimmer. Many of the system vendors see ample opportunity for growth outside of “strict” HPC by using the big data key to open new enterprise doors. SGI has been at the forefront of this as well, rounding out the week with an announcement about their partnership with a big data company called Cognilytics.
As SGI stated, combining their “HPC technology for Big Data analytics, the recently announced plan to develop a SAP HANA in-memory computing appliance, and Cognilytics’ expertise in implementing technologies such as Hadoop and SAP Platform and Analytics solutions including SAP HANA and Predictive Analysis, the partnership enables the enterprise to closely align analytics initiatives with current business models to fully capitalize on results and achieve business objectives.”
There were probably shorter, more succinct ways to say that, but essentially, this is an announcement for the SGI InfiniteData Cluster and SGI UV, which will allow “business and government agencies to utilize Hadoop with faster, greater insight and at lower cost…Coupling the SGI InfiniteData Cluster with Hadoop and SGI UV with SAP HANA, SGI and Cognilytics will be able to provide a turnkey solution in these rapidly growing environments,” says SGI.
On the EDU Front…
Not much news to report from academia, national labs and the like this week, but there are a couple of things worth pointing to. In terms of workshops and education, the Extreme Scaling Workshop issued a call for submissions, while the Argonne Training Program on Extreme-Scale Computing has been scheduled for August.
Also, our congrats to the University of Illinois winners of the MEMOCODE Design Contest, who used their Convey HC-1 system.
That about covers it for the week. Be sure to subscribe to our daily podcast series for your commute or background noise at home and enjoy the weekend. We’ll be back next Thursday night (at some ridiculous hour, as usual) with what we hope will be an equally interesting week’s worth of news.