Talking Grid with IBM’s Ken King

By Nicole Hemsoth

September 25, 2006

Ken King is vice president of Grid computing with responsibility for worldwide business line management of IBM's Grid computing initiatives, including business and technical strategy. In this GRIDtoday Q&A, which originally ran as part of GRIDwire**, King discusses IBM's Grid and Grow program, the buzz around SOA and virtualization, and where the ceiling is for the Open Grid Forum.

— 

GRIDtoday: First, how's the Grid computing business doing at IBM? Do you have any big news on the horizon?

KEN KING: We have been very pleased with our Grid business. Our Grid strategy has allowed us to work with customers to drive new levels of innovation, whether by solving problems they couldn't solve before, enabling new services that were previously inconceivable, or transforming a business process or how something is accomplished within a given company. We are also pleased with how Grid is driving new enhancements to our solutions and middleware portfolio. In terms of big news on the horizon, we continue to enhance our Grid middleware portfolio; our announcement this week of the Tivoli Dynamic Workload Broker is one such example.

We also are seeing a lot of interest in solving the information challenge, and we have worked with clients to implement information grids, whether to deliver information faster to remove computation bottlenecks, to create a federated view of data to improve collaboration, or to gain new levels of business insight from a unified view of the data. SOA continues to be a key focus for us as well, with Grid being a key means of building a dynamic infrastructure that supports a service-oriented architecture and of matching resources (either execution engines or information) to services and dynamic applications. And last is our big focus on expanding the ecosystem with our partner programs, which will continue to help expand the adoption of Grid with all sizes and types of customers.

 
Gt: Can you speak a little about IBM's line of Grid solutions/programs, specifically Grid and Grow?

KING: We are expanding our offerings, solutions and programs as we see opportunity in the marketplace. We started in conventional grid use — high-performance computing, research, academia and philanthropy — and we have led the way in helping customers of all types and sizes, across all industries, leverage Grid for business value. In order to help create an “on ramp” for Grid, we announced Grid and Grow in May of 2005. Since then, we have expanded the program extensively. We have modified the offering to include the SMB market, where Grid opportunities are only just starting to emerge, and we are taking the basic offering and modifying the bundles and service offerings to create very specific Grid solutions that solve specific industry pain points, as we did with the Grid and Grow for Actuarial Analysis announcement we made this summer.

In the meantime, we are putting great focus on the “Grow” part of Grid and Grow. We are finding, in the spirit of the program, that once clients incorporate Grid technology, they want to expand, whether by addressing additional application areas, building larger grids, or pursuing the benefits that can be gained from data grids. We have over 80 different Grid ISVs that provide specific software and middleware we can offer our clients to leverage and extend their Grid installations.

Further, we aren't focused solely on Grid and Grow bundles. We continue to work with enterprise and mid-market clients on customized Grid solutions, as well as our integrated solutions such as our Grid Medical Archive Solution, Optimized Analytic Infrastructure for Financial Sector and IT Resource Optimization for Engineering. We are also very proud of our work with the World Community Grid and other research- and education-based grids, such as SURAgrid, LA Grid and “Big Red” at Indiana University, all of which we have announced in the last year.

 
Gt: What kind of demand have you seen for Grid and Grow, and have specific verticals you've targeted responded more strongly than others?

KING: As I mentioned, we've seen good interest in Grid and Grow. It is probably too early to talk about specific vertical reception being stronger or weaker; however, we are seeing strong interest from the financial services, insurance, industrial, government and education sectors. We are just starting to hear of more interest from retail, but that continues to mature. The financial sector has typically led the pack and is now starting to look at how it can better integrate data with compute grids. Further, industries such as insurance and financial services are challenged with new regulatory requirements that often demand much more complex models with answers delivered much faster than in the past. Second, part of our objective with Grid and Grow was to ensure we had an offering that would enable our channel partners. This has proven very effective, with many new partners getting engaged with Grid and expanding the skills available to our clients.

 
Gt: What are your thoughts on “buzz” technologies like virtualization and SOA? How do they factor into the Grid landscape and how is IBM addressing them in its Grid strategy?

KING: Grid was once a “buzz” technology — in some respects it still is. That being said, adoption of Grid computing is growing exponentially year to year. At IBM, we see a close intersection between Grid, virtualization and SOA, with emphasis on delivering on the promise of a flexible, scalable IT infrastructure. SOA and Grid computing are natural partners. SOAs give organizations the ability to respond rapidly to evolving business requirements by leveraging existing value-add processes as discrete services; Grid computing provides the virtual service infrastructure that will guarantee the availability of these services regardless of the demand placed upon them. It’s really very synergistic and will be even more so as SOA continues its momentum. From a virtualization perspective, many clients start by virtualizing their servers, but quickly realize they can gain optimal business value by taking the next step (leveraging Grid technology) and addressing the virtualization of applications, services and workloads.

 
Gt: Another area that has been getting a lot of attention lately is application virtualization and applying the scalability and manageability of Grid computing to transactional applications. How important is this trend in terms of bringing Grid to a larger audience? What is IBM doing along this front?

KING: Transaction processing is a key workload challenge many customers face. By improving how these workloads are scheduled and combining this core capability with a virtualization foundation and intelligent policy-based workload management, you can effectively consolidate OLTP workloads onto fewer infrastructure resources. This consolidation has two important benefits. First, it can help lower TCO. Second, it can free your infrastructure to support new types of applications. Typically, OLTP, computationally intensive and batch workloads have been run across separate, dedicated infrastructures. So our focus with our WebSphere Extended Deployment (XD) offering is not only to address the middleware necessary to deliver transactional grids, but also to provide the tools and infrastructure for building a true business grid, which combines all of these workloads into a single virtualized grid, taking advantage of the autonomic capabilities of WebSphere XD to schedule work on idle compute resources and across heterogeneous operating environments. It’s really a very hot market segment in which we’ve been very successful, and we continue to enhance WebSphere XD to address this burgeoning market.

 
Gt: Moving on, I'm interested in your thoughts on the formation of the Open Grid Forum. How successful do you believe the organization will be in developing widely accepted Grid standards?

KING: IBM has been a leader within the GGF, and we are excited to see the convergence with the EGA. We think the sky is the limit for the success of the OGF. If you look across the IT industry, any time organizations are able to converge with the goal of focusing on open source and open standards computing, adoption, success and migration follow. IBM is no stranger to open standards and open source computing. We were the first company to endorse Linux, and we have countless milestones and contributions across the business in collaboration and standards work. Grid is a natural extension of this work. To be blunt, I think it is hugely important to see the OGF succeed. Open standards-based Grid solutions will enable customers to receive greater ROI and faster time-to-value for their heterogeneous Grid implementations, which is essential for Grid to grow from a departmental and data center-specific focus to true enterprise optimization. It is good for our business and good for the industry as a whole.

 
Gt: Do you see varying standards between commercial and research sectors?

KING: The short answer is “no.” But before I go into a full explanation of this answer, let me first stress the importance of Grid standards. The very nature of Grid computing, which tries to take broadly distributed, heterogeneous computing and data resources and aggregate them into an “abstracted” set of capabilities, almost demands open standards for integration and interoperability. “True Grid” systems based on standards are capable of achieving the “scale out” promised by the “Grid vision” — where an application can exploit any processing capability required, access any data it needs and not be concerned with the specifics of configuration, management or infrastructure. IBM has always been a strong advocate for open industry standardization in information technology and has provided significant technical leadership in the development of Grid standards. You’ll see us continue to do so.

Grid standards are at a foundation level, at an infrastructure level, where they can address the customer and IT requirements of both commercial and research sectors. Although the “use cases” and customer scenarios in commercial vs. research/education organizations may be different, those requirements converge to the same set of standards at the infrastructure level. Research organizations, for example, have been traditionally more concerned with building collaborative grids, “extra-grids” that link multiple organizations. On the other hand, commercial grids often start at the departmental level and then expand outward across the enterprise. But the underlying requirements at the infrastructure level converge to the same set of standards and protocols. Furthermore, the GGF and EGA merger under the OGF umbrella further ensures convergence of software interoperability standards which will address both commercial and research requirements.

 
Gt: From the viewpoint of a major vendor, how important are standards and interoperability? How concerned are IBM's customers with this topic?

KING: We think, as I mentioned, that interoperability is of the utmost importance both to IBM's long-term success and to our clients as well. The primary way we will ultimately deliver on our promise of SOA and on-demand infrastructures is through the pervasive adoption of standards. As clients make acquisitions, they will continue to be faced with integrating different technologies, hardware and approaches. Our clients don't want to create islands of technology and rip and replace each time they change applications, hardware or Grid middleware. So it's all about creating flexibility of choice and the ability to preserve the investments you make today in your infrastructure solutions of tomorrow. The only answer to this is widely adopted and agreed-upon standards.

 
Gt: GridWorld is serving as the coming out party for OGF. What do you expect to see from OGF from this point on, and how involved will IBM be with the new organization?

KING: IBM has been very involved in building a Grid community from the very early stages of its inception. IBM will continue to be actively involved in all aspects of the Grid community moving forward. One of our primary goals is to build a strong and viable Grid ecosystem where many vendors and customers from all industries participate.

Looking to the future, we see OGF as the foundation of an open, collaborative community where researchers, educators, developers and commercial customers will all contribute and participate to address current and future requirements using Grid and virtualization technologies. OGF unifies the research, education and commercial sectors. We foresee more customer participation and collaboration with developers and solutions providers. While there will continue to be multiple workgroups, whether these will address standards, technical, marketing or other commercial requirements, the exchange of new ideas between different entities will accelerate both the rate of Grid deployments and the rate of adoption of Grid standards.

IBM will continue to play a very active role in OGF. We are core members of the Board of Directors, Advisory Committee, Technical Committee, Marketing Committee and many of the workgroups within OGF. We will continue to play an active leadership role into the future.

 
Gt: Finally, speaking of GridWorld, I'd like to discuss it a little. First, I'm wondering if you could speak a little about your personal participation as a speaker/presenter.

KING: I will be part of a panel discussing “The Impact of Grid on Business Today.” This is key to the adoption of Grid. The more customers can see the business value and quick ROI (not just technology value and cost savings) achieved from Grid implementations, the faster adoption will occur. So, I am always happy to educate and help customers understand the true value of Grid. I also did a presentation at GridWorld Tokyo, which articulated how Grid helps fuel innovation for our customers, which is critical in today’s business environment to drive competitive advantage.

 
Gt: How important do you think the event is for the Grid community? How important (or, dare I say, groundbreaking) is it to have the GGF, EGA and Globus communities under one roof for one big event?

KING: GridWorld is an extremely important event, breaking new ground in collaboration and customer participation. Not only is GridWorld a forum for the technical community to come together and collaborate on critical challenges surrounding Grid development and standards, but it is also an opportunity for commercial customers and enterprises to have their voices heard regarding their own challenges and requirements, and to interface directly with the technical community and solution providers.

There is no doubt that the increased number of Grid participants will also create additional challenges. For example, the logistics of collaboration in workgroups will become more complicated with many more people involved in a single event. But we have an opportunity to transform these challenges into positive outcomes for increased collaboration and continuous participation in the Grid community, whether that takes the form of smaller events or dedicated off-site workgroup meetings. The key will be to use such a big event under one roof as a means of encouraging continuous and active involvement in all aspects of the Grid community in the future.

** GRIDwire, GRIDtoday's exclusive coverage from GridWorld 2006, can be seen at www.gridtoday.com/gridworld/06/index.html
