Talking Grid with IBM’s Ken King

By Nicole Hemsoth

September 25, 2006

Ken King is vice president of Grid computing with responsibility for worldwide business line management of IBM's Grid computing initiatives, including business and technical strategy. In this GRIDtoday Q&A, which originally ran as part of GRIDwire**, King discusses IBM's Grid and Grow program, the buzz around SOA and virtualization, and where the ceiling is for the Open Grid Forum.


GRIDtoday: First, how's the Grid computing business doing at IBM? Do you have any big news on the horizon?

KEN KING: We have been very pleased with our Grid business. Our Grid strategy has allowed us to work with customers to drive new levels of innovation, whether that means solving problems they couldn't solve before, offering new services that were inconceivable before, or transforming a business process or how something is accomplished within a given company. We are also pleased with how Grid is driving new enhancements to our solutions and middleware portfolio. In terms of big news on the horizon, we continue to enhance our Grid middleware portfolio. Our announcement this week of the Tivoli Dynamic Workload Broker is one such example.

We also are seeing a lot of interest in solving the information challenge, and we have worked with clients to implement information grids, whether to deliver information faster and remove computation bottlenecks, to create a federated view of data to improve collaboration, or to gain new levels of business insight from a unified view of the data. SOA continues to be a key focus for us as well: Grid is a key means of building a dynamic infrastructure that supports a service-oriented architecture, and of matching resources (either execution engines or information) to services and dynamic applications. And last is our big focus on expanding the ecosystem with our partner programs, which will continue to help expand the adoption of Grid with all sizes and types of customers.
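The "federated view of data" idea King describes can be illustrated with a toy sketch. This is not IBM middleware, and the source names (a trading desk and a retail unit) are hypothetical; the sketch only shows the core notion of presenting several independent data sources as one unified, queryable collection.

```python
# Illustrative sketch (not IBM's actual information-grid software):
# a federated view that presents records from several independent
# data sources as one unified collection that can be queried as a whole.

def make_source(records):
    """Each 'source' is modeled as a callable returning its records."""
    return lambda: list(records)

def federated_view(sources):
    """Merge records from every source into one unified view."""
    unified = []
    for fetch in sources:
        unified.extend(fetch())
    return unified

# Hypothetical per-department sources sharing a common schema.
trading = make_source([{"id": 1, "dept": "trading", "risk": 0.7}])
retail = make_source([{"id": 2, "dept": "retail", "risk": 0.2}])

view = federated_view([trading, retail])
# A query over the unified view spans both sources at once.
high_risk = [r for r in view if r["risk"] > 0.5]
print(len(view), len(high_risk))  # 2 records in the view, 1 flagged
```

The point of the sketch is that the query runs against the unified view, not against each source separately, which is the collaboration and insight benefit the passage attributes to information grids.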

Gt: Can you speak a little about IBM's line of Grid solutions/programs, specifically Grid and Grow?

KING: We are expanding our offerings, solutions and programs as we see opportunity in the marketplace. We started in conventional grid use — high-performance computing, research, academia and philanthropy — and we have led the way in helping customers of all types and sizes, across all industries, leverage Grid for business value. In order to help create an “on ramp” for Grid, we announced Grid and Grow in May of 2005. Since then, we have expanded the program extensively. We have modified the offering to include the SMB market, where Grid opportunities are only now starting to emerge, and we are taking the basic offering and modifying the bundles and service offerings to create very specific Grid solutions that solve specific industry pain points, as we did with the Grid and Grow for Actuarial Analysis announcement we made this summer.

In the meantime, we are putting great focus on the “Grow” part of Grid and Grow. We are finding, in the spirit of the program, that once clients incorporate Grid technology, they want to expand, whether by addressing additional application areas, building larger grids, or pursuing the benefits that can be gained from data grids. We have over 80 different Grid ISVs that provide specific software and middleware we can offer to our clients to leverage and extend their Grid installations.

Further, we aren't focused solely on Grid and Grow bundles. We continue to work with enterprise and mid-market clients on customized Grid solutions, as well as on our integrated solutions such as our Grid Medical Archive Solution, Optimized Analytic Infrastructure for the Financial Sector and IT Resource Optimization for Engineering. We are also very proud of our work with the World Community Grid and other research- and education-based grids, like our work with SURAgrid, LA Grid and “Big Red” at Indiana University, all of which we have announced in the last year.

Gt: What kind of demand have you seen for Grid and Grow, and have specific verticals you've targeted responded more than others?

KING: As I mentioned, we've seen good interest in Grid and Grow. It is probably too early to talk about specific vertical reception being stronger or weaker; however, we are seeing strong interest from the financial services sector, insurance, the industrial sector, government and education. We are just starting to hear more interest from retail, but that market continues to mature. The financial sector has typically led the pack and is now starting to look at how to better integrate data grids with compute grids. Further, industries such as insurance and financial services are challenged with new regulatory requirements that often demand much more complex models, with answers delivered much faster than in the past. Finally, part of our objective with Grid and Grow was to ensure we had an offering that would enable our channel partners. This has proven very effective, with many new partners getting engaged with Grid and expanding the skills available to our clients.

Gt: What are your thoughts on “buzz” technologies like virtualization and SOA? How do they factor into the Grid landscape and how is IBM addressing them in its Grid strategy?

KING: Grid was once a “buzz” technology — in some respects it still is. That being said, adoption of Grid computing is growing rapidly year over year. At IBM, we see a close intersection between Grid, virtualization and SOA, with an emphasis on delivering on the promise of a flexible, scalable IT infrastructure. SOA and Grid computing are natural partners. SOAs give organizations the ability to respond rapidly to evolving business requirements by exposing existing value-add processes as discrete services; Grid computing provides the virtual service infrastructure that guarantees the availability of those services regardless of the demand placed upon them. It's really very synergistic, and will be even more so as SOA continues its momentum. From a virtualization perspective, many clients start by virtualizing their servers, but quickly realize they can gain optimal business value by taking the next step and using Grid technology to virtualize applications, services and workloads.

Gt: Another area that has been getting a lot of attention lately is application virtualization and applying the scalability and manageability of Grid computing to transactional applications. How important is this trend in terms of bringing Grid to a larger audience? What is IBM doing along this front?

KING: Transaction processing is a key workload challenge many customers face. By improving how these workloads are scheduled, and by combining that core capability with a virtualization foundation and intelligent policy-based workload management, you can effectively consolidate OLTP workloads on fewer infrastructure resources. This consolidation has two important benefits. First, it can help lower TCO. Second, it can free your infrastructure to support new types of applications. Typically, OLTP, computationally intensive and batch workloads have been run across separate, dedicated infrastructures. So our focus with our WebSphere Extended Deployment (XD) offering is not only to address the middleware necessary to deliver transactional grids, but also to provide the tools and infrastructure for building a true business grid, which combines all of these workloads into a single virtualized grid, taking advantage of the autonomic capabilities of WebSphere XD to schedule work on idle compute resources and across heterogeneous operating environments. It's really a very hot market segment in which we've been very successful, and we continue to enhance WebSphere XD to address this burgeoning market.
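The consolidation King describes, placing mixed OLTP and batch work on one shared pool under policy-based management, can be sketched minimally. This is an illustrative toy, not WebSphere XD's actual API; the job kinds, costs and node names are hypothetical assumptions for the example.

```python
# Illustrative sketch (not WebSphere XD): a tiny policy-based broker
# that places mixed OLTP and batch jobs on a shared node pool,
# always using the least-loaded (idlest) node and scheduling
# latency-sensitive OLTP work before batch work.

import heapq

PRIORITY = {"oltp": 0, "batch": 1}  # lower value = scheduled first

def schedule(jobs, nodes):
    """Assign each job to the currently least-loaded node, OLTP first."""
    # Min-heap of (current load, node name): idle capacity surfaces first.
    pool = [(0, name) for name in nodes]
    heapq.heapify(pool)
    placement = {}
    for job in sorted(jobs, key=lambda j: PRIORITY[j["kind"]]):
        load, name = heapq.heappop(pool)
        placement[job["id"]] = name
        heapq.heappush(pool, (load + job["cost"], name))
    return placement

# Hypothetical workload mix on a two-node pool.
jobs = [
    {"id": "batch-report", "kind": "batch", "cost": 4},
    {"id": "oltp-orders", "kind": "oltp", "cost": 1},
    {"id": "oltp-quotes", "kind": "oltp", "cost": 1},
]
plan = schedule(jobs, ["node-a", "node-b"])
print(plan["oltp-orders"] != plan["oltp-quotes"])  # OLTP jobs land on different nodes
```

The design choice worth noting is the combination of a priority policy (which job goes first) with a load-aware placement (where it goes); real business-grid middleware layers far richer policies on the same two decisions.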

Gt: Moving on, I'm interested in your thoughts on the formation of the Open Grid Forum. How successful do you believe the organization will be in developing widely accepted Grid standards?

KING: IBM has been a leader with GGF, and we are excited to see the convergence with EGA. We think the sky is the limit for the success of the OGF. If you look across the IT industry, any time organizations are able to converge around open source and open standards, adoption, success and migration follow. IBM is no stranger to open-standard and open source computing. We were the first company to endorse Linux, and we have countless milestones and contributions across the business in collaboration and standards work. Grid is a natural extension of this work. To be blunt, I think it is hugely important that the OGF succeed. Open, standards-based Grid solutions will enable customers to receive greater ROI and faster time-to-value for their heterogeneous Grid implementations, which is essential for Grid to grow from a departmental and data center-specific focus to true enterprise optimization. It is good for our business and good for the industry as a whole.

Gt: Do you see varying standards between commercial and research sectors?

KING: The short answer is “no.” But before I go into a full explanation of this answer, let me first stress the importance of Grid standards. The very nature of Grid computing, which tries to take broadly distributed, heterogeneous computing and data resources and aggregate them into an “abstracted” set of capabilities, almost demands open standards for integration and interoperability. “True Grid” systems based on standards are capable of achieving the “scale out” promised by the “Grid vision” — where an application can exploit any processing capability required, access any data it needs and not be concerned with the specifics of configuration, management or infrastructure. IBM has always been a strong advocate for open industry standardization in information technology and has provided significant technical leadership in the development of Grid standards. You’ll see us continue to do so.

Grid standards operate at a foundation level, at the infrastructure level, where they can address the customer and IT requirements of both the commercial and research sectors. Although the “use cases” and customer scenarios in commercial versus research/education organizations may differ, those requirements converge to the same set of standards at the infrastructure level. Research organizations, for example, have traditionally been more concerned with building collaborative grids, “extra-grids” that link multiple organizations. Commercial grids, on the other hand, often start at the departmental level and then expand outward across the enterprise. But the underlying requirements at the infrastructure level converge to the same set of standards and protocols. Moreover, the GGF and EGA merger under the OGF umbrella further ensures convergence of software interoperability standards that will address both commercial and research requirements.

Gt: From the viewpoint of a major vendor, how important are standards and interoperability? How concerned are IBM's customers with this topic?

KING: As I mentioned, we think interoperability is of the utmost importance, both to IBM's long-term success and to our clients as well. The primary way we will ultimately deliver on our promise of SOA and on-demand infrastructures is through the pervasive adoption of standards. As clients make acquisitions, they will continue to be faced with integrating different technologies, hardware and approaches. Our clients don't want to create islands of technology, or rip and replace each time they change applications, hardware or Grid middleware. So it's all about creating flexibility of choice and the ability to preserve the investments you make today in the infrastructure solutions of tomorrow. The only answer to this is widely adopted and agreed-upon standards.

Gt: GridWorld is serving as the coming-out party for OGF. What do you expect to see from OGF from this point on, and how involved will IBM be with the new organization?

KING: IBM has been very involved in building a Grid community from the very early stages of its inception. IBM will continue to be actively involved in all aspects of the Grid community moving forward. One of our primary goals is to build a strong and viable Grid ecosystem where many vendors and customers from all industries participate.

Looking to the future, we see OGF as the foundation of an open, collaborative community where researchers, educators, developers and commercial customers will all contribute and participate to address current and future requirements using Grid and virtualization technologies. OGF unifies the research, education and commercial sectors. We foresee more customer participation and collaboration with developers and solutions providers. While there will continue to be multiple workgroups, whether these will address standards, technical, marketing or other commercial requirements, the exchange of new ideas between different entities will accelerate both the rate of Grid deployments and the rate of adoption of Grid standards.

IBM will continue to play a very active role in OGF. We are core members of the Board of Directors, Advisory Committee, Technical Committee, Marketing Committee and many of the workgroups within OGF. We will continue to play an active leadership role into the future.

Gt: Finally, speaking of GridWorld, I'd like to discuss it a little. First, I'm wondering if you could speak a little about your personal participation as a speaker/presenter.

KING: I will be part of a panel discussing “The Impact of Grid on Business Today.” This is key to the adoption of Grid: the more customers can see the business value and quick ROI (not just technology value and cost savings) achieved from Grid implementations, the faster adoption will occur. So, I am always happy to educate and help customers understand the true value of Grid. I also gave a presentation at GridWorld Tokyo that articulated how Grid helps fuel innovation for our customers, something that is critical in today's business environment for driving competitive advantage.

Gt: How important do you think the event is for the Grid community? How important, or dare I say groundbreaking, is it to have the GGF, EGA and Globus communities under one roof for one big event?

KING: GridWorld is an extremely important event, breaking new ground in the areas of collaboration and customer participation. Not only is GridWorld a forum for the technical community to come together and collaborate on critical challenges surrounding Grid development and standards, it is also an opportunity for commercial customers and enterprises to have their voices heard regarding their own challenges and requirements, and to interface directly with the technical community and solution providers.

There is no doubt that the increased number of Grid participants will also create additional challenges. For example, the logistics of collaboration in workgroups will become more complicated with many more people involved in a single event. But we have an opportunity to transform these challenges into positive outcomes for increased collaboration and continuous participation in the Grid community, whether that means smaller events or dedicated off-site workgroup meetings. The key will be to use such a big event under one roof as a means of encouraging continuous and active involvement in all aspects of the Grid community in the future.

** GRIDwire, GRIDtoday's exclusive coverage from GridWorld 2006, can be seen at
