OpenPOWER Alliances, IBM
Ken King, the General Manager of OpenPOWER Alliances for IBM, is responsible for building IBM’s ecosystem of global partners on top of the POWER architecture.
With traditional supercomputing approaches struggling to keep pace with the growth of big data, IBM’s new OpenPOWER-based systems are expected to bring IBM back into the HPC arena in a big way by using a data-centric approach that will bring compute power to the data, and in doing so minimize data transfer bottlenecks, energy consumption, and costs. These systems are the debut of OpenPOWER innovation in supercomputing and the result of the collaboration of OpenPOWER Foundation members, including IBM, NVIDIA and Mellanox.
IBM kicked off this year’s Supercomputing conference by announcing a $325 million contract award from the U.S. Department of Energy (DOE) to develop and supply next-generation supercomputers for Oak Ridge National Laboratory and Lawrence Livermore National Laboratory.
The DOE project is expected to be only one of the many ways that IBM and the OpenPOWER Foundation plan to bolster the industry. With more than 70 companies, including NVIDIA, Mellanox, Altera and Nallatech, the OpenPOWER Foundation has set in motion an open, integrated approach to driving the future of HPC.
HPCwire: Hi Ken – After much shake-up in 2014, IBM is approaching the supercomputing arena with a completely new approach in 2015, and you’re at the center of it all. In a recent blog post, you went so far as to say that “IBM and the OpenPOWER Foundation plan to revolutionize supercomputing.” What can you tell us about the new direction that IBM is taking?
What’s radically different for the supercomputing industry is that, until now, high performance chip architecture had been closed, not allowing for collaborative development. IBM’s decision to open our POWER architecture up through the OpenPOWER Foundation – now with over 90 members worldwide – has changed everything. Collaboration is driving real innovation.
This is especially true among the OpenPOWER members that participate in the HPC market, which include industry leaders in memory, acceleration and software development, as well as research universities and high-profile HPC end users. Working together, these OpenPOWER members are building more tightly integrated solutions that will demonstrate breakthroughs in performance. And we’ve only just begun. Keep your eye on us this year. You’ll see more collaboration, more innovation and more organizations join the OpenPOWER Foundation. You’ll also see more OpenPOWER-based products come to market.
HPCwire: IBM announced at Supercomputing that it had won a $325 million contract award from the U.S. Department of Energy for the next-generation supercomputers at Lawrence Livermore and Oak Ridge National Laboratories, using IBM’s “Data Centric” systems that the company says can move data to the processor at more than 17 petabytes per second. At a high level, how does this “Data Centric” model work, and what sort of breakthroughs can we look forward to with this level of data movement in the near term that might not have been available before?
From a high-level perspective, data-centric computing is a breakthrough approach that allows computations to take place either right at, or very close to, where the data resides. This covers the entire infrastructure, including storage and networks. Historically, data has had to physically move to processors for computations to occur. As volumes of data increase exponentially, the simple task of moving data is beginning to create significant delays in the time it takes to reach a solution.
However, with a data-centric design in place, processing is in physical proximity to data throughout the system, storage and networking hierarchy, radically minimizing the amount of data movement and substantially curtailing the associated latency. This speeds time to solution and drives more value for the end user. From a technical perspective, data-centric computing requires a very tight integration of memory, accelerators and other system componentry. This is where the advantage of an open server architecture shines, as it allows for even tighter integration at all levels of the hardware and software stack.
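The contrast King describes, between shipping data to a central processor and pushing computation out to where the data lives, can be sketched in a few lines of Python. This is a toy illustration under assumed node layouts, not IBM’s implementation: each “node” reduces its resident records locally, so only small partial results cross the network.

```python
# Toy sketch of data-centric vs. traditional data movement (not IBM code).
# Three hypothetical nodes, each holding 1,000 resident records.
data_on_nodes = [
    list(range(0, 1000)),      # records resident on node 0
    list(range(1000, 2000)),   # records resident on node 1
    list(range(2000, 3000)),   # records resident on node 2
]

# Data-to-compute (traditional): move all 3,000 records to one place,
# then compute there.
centralized = [x for node in data_on_nodes for x in node]
values_moved_traditional = len(centralized)   # 3,000 values cross the network
result_traditional = sum(centralized)

# Compute-to-data (data-centric): each node reduces its own data locally;
# only the three partial sums travel to be combined.
partial_sums = [sum(node) for node in data_on_nodes]
values_moved_data_centric = len(partial_sums)  # 3 values cross the network
result_data_centric = sum(partial_sums)

# Same answer, three orders of magnitude less data movement in this example.
assert result_traditional == result_data_centric
```

The point of the sketch is that the result is identical either way; what changes is how much data has to traverse the storage and network hierarchy to produce it.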
HPCwire: Big Data, Open Source, and the Cloud are obviously big, overarching technology trends that are important to IBM. What specific trends within these arenas do you see as important to take note of in 2015?
It’s our belief that never before have the challenges and opportunities within the tech industry been greater than they are today. Innovation is key to harnessing the constant, incredible volume of data available today. More innovation is also needed to realize the full potential of cloud computing. Achieving innovation, however, requires a new level of openness and meaningful collaboration across the industry. No one company can, or should, control an innovation agenda. I would suggest, therefore, that “openness” is a key trend to watch in 2015.
Open Source is a long-standing sister effort complementing our novel activities on open hardware as characterized by OpenPOWER. The combination of Open Source with OpenPOWER presents the market with a unique set of innovations. We think that will help accelerate the pace of innovation in HPC with remarkable benefits accruing across the user community.
I suspect we will see continued enthusiasm for Linux, more open collaboration among unlikely partners and formal cooperation taking place among like-minded organizations in the open development community.
With respect to Cloud and Big Data, the market is beginning to signal a more vigorous acceptance of Cloud for HPC, motivated by a desire to reduce or avoid the management complexity of operations and infrastructure, but also to help mitigate the financial exposure of uneven capacity requirements. Issues associated with Big Data will be factored into the Cloud from the outset and will have more impact on the architecture of the solution than the algorithms will. Big Data will also come to be seen as an integral part of any HPC solution, and the value proposition of HPC will begin to shift from a flops-centric viewpoint to a data-centric one. And finally, we expect customers to start putting more emphasis on key elements of the solution beyond the server.
HPCwire: On a more intimate level, what can you tell us about yourself – personal life, family, background, hobbies?
I’m a career IBMer. I started in college as an intern and have been here ever since (31 years!). Besides all the great reasons to work at IBM, I’ve enjoyed the diversity of having had a variety of different career experiences, while at the same time benefiting from working within one of the most respected corporate cultures in the world. I have a wife and three sons. One son recently graduated from college and another will be graduating this year, so they will both soon be “off the payroll.” My youngest son is still in high school, so any grand visions of an “empty nester” lifestyle will have to wait a bit.
In addition, I’m a life-long baseball and softball player, only recently slowing down due to age. However, I continue to golf, run and ski regularly. Increasingly, I look forward to getting more active “on the water” as time permits.
HPCwire: One last question – What can you share about yourself that you think your colleagues would be surprised to learn?
I’m not sure there’s anything that would completely surprise my colleagues, but I do have a tendency to take on activities or new career moves that are repeatedly outside my comfort zone. My last five jobs at IBM have had absolutely nothing to do with the roles that preceded them. So my career growth has been built on varied experiences.
On a personal front, I’ve had opportunities to sky dive, fly a plane, bungee jump and race cars, among many other activities. So I haven’t really followed a standard, conservative pattern in my life journey. When I eventually retire, I suspect it’s only going to get even more varied.