IBM’s New Deal Captures Refocused HPC Strategy

By Nicole Hemsoth

January 29, 2014

It’s been quite a month of news for IBM, with the sale of its x86 business to Lenovo followed by some intense questioning about what the deal means for its vision (and future) with HPC customers, particularly in academia and government. And while some might call this the centerpiece of a shift in strategy, something bigger is looming on Big Blue’s horizon, and the processor piece is only one sliver of the story arc.

But Dave Turek, IBM’s Vice President of Advanced Computing, says that ultimately, and yes, even after the Lenovo news, the business will march along to the HPC drum, but with some new beats added to an old tune. After all, as he reminds us, IBM was never the only x86 vendor supplying servers to governments and universities. Further, these commodity approaches might not be as well equipped for a data-defined future, one that Turek says requires a honed sense of overall workflow instead of mere flops. Accordingly, IBM is unfurling a grander approach to big science (and big business) problems that blends subtler hues into the HPC server portrait, leading to what might be a completely different picture in the years to come.

Specifically, IBM will be melding some (as yet unnamed) upcoming technologies with its vision of data-driven systems, one that makes the concept of workflow, and the end goals of the user, paramount. The challenge there isn’t going to be about technology so much as about merging these concepts and moving “classic HPC” away from its traditional focus. For IBM, the shift involves a wide-ranging view of the entire data lifecycle and, not surprisingly, significant investment in Power.

He noted that IBM’s proposition going forward is “to attack the entire workflow from the perspective of how data is acquired, managed, governed and analyzed in many different areas of the HPC infrastructure, not just the server. The consequence of that is that the nature of what servers look like in the future might change a bit.”

In other words, what IBM sees going forward sounds a lot like the language around Watson and its Smarter Planet array of technologies, parallels that Turek made explicit during a conversation this week in the wake of news about a new partnership with Texas A&M. That collaboration meshes data analytics with high performance computing via IBM-hosted (cloud-delivered) Blue Gene/Q power, managed with Platform Computing and leveraging GPFS. The deal gives the company a way to show off the blend of its priority tools, says Turek: the GPFS file system, the Platform software, and the ability to drive data across a range of applications. The collaboration is, according to IBM, aimed at “improving extraction of Earth-based energy resources, facilitating the smart energy grid, accelerating materials development, improving disease identification and tracking in animals, and fostering better understanding and monitoring of our global food supplies.” Again, all of these have elements aligned with IBM’s Smarter Planet initiative.

“All the servers are on the periphery and the data is in the center of the proposition,” explained Turek. IBM wanted Texas A&M’s strategy to mirror its own approach to data-centric high performance computing, creating an infrastructure that lets the university bring different architectures to bear on the problems at hand. This deal is characteristic of one of the shifts happening in IBM’s overall HPC strategy, he noted. “The goal is to progressively engage clients in collaboration as a way to help us better understand different market segments and problem domains and to build better products… It’s about the right tool for the right problem,” says Turek.

Even though this is essentially a collaboration rooted in research HPC, he stressed that the Blue Gene piece isn’t the keystone of the story; it’s all about the data problems being addressed in novel, practical and workflow-conscious ways. “There’s a needed integration between big data and classic HPC technologies. These are inseparable concepts for us going forward. Too often, players in the HPC space have cherry-picked an algorithm or set of partial differential equations and they sort of thump their chests and say, ‘look how fast we made this go.’ The fact is, if in the general workflow of interest to the client, that piece of work went from a day to a second, you’ve just improved the overall performance of the much larger workflow by only a few percent.”

“Our industry has been hamstrung by self-defining itself as a vehicle to produce devices that try to optimally solve collections of partial differential equations or nonlinear/linear equations. But that can be such a small part of what constitutes an overall HPC workflow that we’ve ended up providing a degree of disservice to the industry at large.”

Turek says that when IBM introduced the concept of workflow a couple of years ago in its exascale conversations with the DoE and others, the company made it clear that when you begin to look at workflows, it’s necessary to factor in data management, data flows and data organization as explicitly as one attends to algorithms per se. When asked how this changes the direction of IBM’s HPC systems of the future, especially without an x86 play for some of its core HPC customers in government and academia, he noted that the data-centric architecture goes far beyond the micro-architecture view of evaluating systems. “You have to take into account where data sits, how it’s moved, and where it’s processed. Sure, there’s a processor involved, but there are a lot of ways to attack these problems from an overall workflow perspective.”

The final word from IBM is a steady, firm commitment to HPC, one that absolutely extends to the government and academic spaces (those areas being the focus of the question). “We are materially engaged with them in terms of co-design activities for future investments and our own investment in our Power servers to make sure that we’re offering them the best solutions in the world going forward. We’ll continue with these investments in HPC. We will be focused in terms of getting that investment centered on our Power technology and we’ll preserve our business relationship with Lenovo to provide the Intel-based technologies that customers we encounter might require.”
