A Case for Grid Computing in Insurance

By N.R. Veeraraghavan and Nandha Kumar, Infosys Technologies

February 6, 2006

Insurance companies around the globe face the challenge of shrinking margins. Increased competition from banks and brokerages is not making life any easier for insurers. For many financial institutions, expense growth has outpaced revenue growth, and cutting expenses has become a mandate for maintaining profitability. At the same time, these companies' need for computing resources grows daily, driven by ever more demanding customers, better analytics that enable better risk management and decision making, and increasing regulatory requirements.

Against this backdrop, Grid computing seems to be the key for many of the challenges faced by insurance companies.

Ian Foster defines Grid computing as controlled and coordinated resource sharing and problem solving in dynamic, multi-institutional virtual organizations. This sharing of resources, ranging from simple file transfers to complex, collaborative problem solving, takes place under controlled, well-defined conditions and policies. The dynamic groupings of individuals, groups or organizations that define these conditions and rules for sharing are called virtual organizations.

Grid computing focuses on resource sharing, coordination and high performance. It shares resources by integrating services across distributed, heterogeneous, dynamic virtual organizations formed from disparate sources within a single institution and/or across external organizations.

The concept of the virtual organization is key to Grid computing. A virtual organization is a dynamic set of individuals and/or institutions defined around a set of resource-sharing rules and conditions. The members of a virtual organization share some commonality, including common concerns and requirements.

Why Grid Computing?

Grid computing provides consistent, inexpensive access to computational resources (desktops, servers, supercomputers, storage systems, data sources, instruments and people) regardless of their physical location or access point. As such, the Grid provides a single, unified resource for solving large-scale, compute- and data-intensive applications. A look at some of the numbers makes the argument for Grid computing all the more compelling.

Over a 24-hour period, the average UNIX server is actually “serving” less than 10 percent of the time. For mainframes the figure is about 40 percent, and for desktops less than 5 percent. This means that an average desktop sits idle an unbelievable 95 percent of the time, which translates into hundreds of millions of dollars of valuable processing time that could be reclaimed through Grid technology.
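To see where the savings claim comes from, consider a back-of-the-envelope calculation. The fleet sizes and the dollar value assigned to a machine-hour below are hypothetical, chosen only for illustration; the utilization percentages are the ones cited above:

```python
# Back-of-the-envelope version of the utilization figures above.
# Fleet sizes and cost per machine-hour are hypothetical.
utilization = {"unix_server": 0.10, "mainframe": 0.40, "desktop": 0.05}
fleet = {"unix_server": 200, "mainframe": 2, "desktop": 5000}  # machines (assumed)
cost_per_cpu_hour = 0.25  # assumed value of one machine-hour, in dollars

for kind, busy in utilization.items():
    idle_hours_per_day = fleet[kind] * 24 * (1 - busy)
    yearly_value = idle_hours_per_day * 365 * cost_per_cpu_hour
    print(f"{kind}: {idle_hours_per_day:,.0f} idle machine-hours/day, "
          f"~${yearly_value:,.0f}/year reclaimable")
```

Even at these modest assumed rates, the idle desktop capacity alone dwarfs that of the servers, which is exactly the capacity a Grid harvests.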

One of the compelling reasons for enterprises to adopt Grid computing is to achieve infrastructure and information virtualization. Today's industry, wrapped up with the idea of discrete applications running on discrete computers, is constantly haunted by issues of capacity planning, efficient management of system resources and robust performance of various systems (see insert 1).

Insert 1

The UK Department of Trade and Industry (DTI) has invested £1 million in the IECnet program to promote Grid computing, raise awareness of its benefits and overcome board-level ignorance. Grid computing would also provide delivery mechanisms for existing services.

Source: http://news.taborcommunications.com/msgget.jsp?mid=452685&xsl=story.xsl

Some of the benefits of Grid computing include:

  • Aggregating the performance of many systems to build a virtual supercomputer.
  • Creating a virtual organization that enables sharing of applications and data, linking together information located at multiple sites.
  • Optimizing applications.
  • Optimizing and managing workload across multiple resources.
  • Processing effectively and efficiently, with significant reductions in processing time.
  • Reducing the total cost of ownership.
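The pattern behind most of these benefits is scatter/gather: split a large job into independent pieces, run each piece on whatever machine is free, and merge the results. A minimal single-machine sketch of that pattern follows; a local worker pool stands in for real Grid middleware, and the per-policy pricing formula is purely illustrative:

```python
from concurrent.futures import ThreadPoolExecutor

def price_block(policies):
    # Stand-in for an expensive per-policy computation (e.g. a reserve
    # calculation); the 1.05 loading factor is illustrative only.
    return sum(p * 1.05 for p in policies)

def run_on_grid(portfolio, n_workers=4):
    # Scatter: split the portfolio into independent chunks.
    size = max(1, len(portfolio) // n_workers)
    chunks = [portfolio[i:i + size] for i in range(0, len(portfolio), size)]
    # On a real Grid, middleware would schedule each chunk onto an idle
    # node; here a local worker pool plays that role.
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        partials = pool.map(price_block, chunks)
    # Gather: merge the partial results.
    return sum(partials)

portfolio = [100.0] * 1000
print(run_on_grid(portfolio))
```

The answer is identical to the serial computation; only the elapsed time changes as workers are added, which is why the approach scales across idle machines.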

The financial services industry can benefit greatly from Grid computing. A few insurance companies (see insert 2) and employee benefit providers (see insert 3) have already introduced Grid computing and profited from the technology. RBC Insurance, the insurance arm of the Royal Bank of Canada, Grid-enabled an existing application based on IBM xSeries servers and middleware from IBM Business Partner Platform Computing. The integrated solution allowed RBC Insurance to reduce the time spent on job scheduling by 75 percent, and the time spent processing an actuarial application by 97 percent. Based on that success, RBC Insurance is now looking to expand the IBM and Platform Computing solution across additional applications and business units to improve efficiency and increase customer satisfaction.

“IBM and Platform Computing Grid-enabled our valuation application and supporting infrastructure for immediate results,” said Keith Medley, head of insurance technology at RBC Insurance. “With the integrated solution, we have been able to reduce a 2.5 hour job to 10 minutes, and an 18 hour job to 32 minutes. We are now looking to move to a production environment. By virtualizing applications and infrastructure, we anticipate being able to deliver higher quality services to our clients faster than ever before, which will significantly impact our competitive edge.”

Insert 2

Hartford Life is among the first life insurance companies to implement Grid computing. The complexity of its variable annuity products called for efficient risk management, which translated into a demand for high computing power. Management realized that Grid computing could meet this pressing business need. Using Condor, Grid computing software from the University of Wisconsin, Hartford completed the project on its own. It can now simulate market behavior and policyholder behavior over the next 15-20 years and map its risks. Grid computing has helped Hartford Life in many ways: it has enabled a very sophisticated risk management capability, and other benefits include scalability with increasing volume, stability while expanding the network, and cost savings. The main challenge was a lack of subject-matter expertise, since the technology is still new. Though Grid computing is a viable technology, organizations need the right engineering acumen for a smooth implementation.

Source: www.loma.org/res-02-05-grid.asp
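Long-horizon simulations like Hartford's are natural Grid workloads because every simulated path is independent of every other, so paths can be farmed out to separate nodes. The toy Monte Carlo sketch below illustrates the shape of such a workload; the drift, volatility, horizon and guarantee floor are made-up parameters, not Hartford's model:

```python
import random

def simulate_path(years=20, start=100.0, drift=0.05, vol=0.15, seed=None):
    # One simulated market path over `years` annual steps. Each path is
    # independent, so paths can run on separate grid nodes.
    rng = random.Random(seed)
    value = start
    for _ in range(years):
        value *= 1 + drift + rng.gauss(0, vol)
    return value

def estimate_tail_risk(n_paths=10_000, floor=50.0):
    # Probability the account value ends below a guarantee floor --
    # a stand-in for the risk measures a variable-annuity writer needs.
    outcomes = [simulate_path(seed=i) for i in range(n_paths)]
    return sum(1 for v in outcomes if v < floor) / n_paths

print(f"P(final value < 50): {estimate_tail_risk():.3f}")
```

Doubling the node count roughly halves the wall-clock time of such a study, which is the scalability benefit Hartford reports.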

Insert 3

Global HR and benefit plan provider Hewitt has demonstrated that grids can solve its business problem. Hewitt had a mainframe-hosted application that performed complex pension calculations for 5.5 million of Hewitt's customers. Usage volume fluctuated wildly as rumors of mergers, acquisitions or early retirement programs circulated, and volume could double without warning. At such high volumes, the pension calculations consumed a great deal of the host system's computing power and became very expensive to run on a mainframe. The application made an ideal grid candidate for another reason, too: apart from querying the database to collect the data needed for its calculations, it interacted very little with other IT systems and data, so the problem could be divided into smaller pieces and executed at different locations. At Hewitt, the grid now works like a giant CPU, returning the end result of each pension calculation query to the mainframe. The introduction of grid computing reduced transaction costs by 90 percent.

Source: www.networkworld.com/supp/2004/ndc6/1025hewitt.html
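What made the Hewitt application grid-friendly is that each member's calculation depends only on that member's own data, so the 5.5 million calculations can be cut into work units of any convenient size and handed to whichever node is free. A hedged sketch of that partitioning follows; the chunk size and the per-member formula are illustrative, not Hewitt's:

```python
def make_work_units(member_ids, unit_size=10_000):
    # Cut a large, independent workload into fixed-size work units
    # that a grid scheduler can hand to whichever node is free.
    return [member_ids[i:i + unit_size]
            for i in range(0, len(member_ids), unit_size)]

def pension_for(member_id):
    # Placeholder for the real per-member pension calculation,
    # which reads only that member's own data.
    return member_id * 0.01  # illustrative formula

def process_unit(unit):
    # One work unit, executed on one node; results flow back
    # to the coordinating (mainframe) side.
    return {m: pension_for(m) for m in unit}

print(len(make_work_units(range(5_500_000))))  # prints 550
```

Because the units share no state, adding nodes when volume doubles without warning is just a matter of scheduling more units at once.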

Issues

Most Grid computing solutions are home grown, and vendor support for Grid computing has not yet matured. Grid computing also has limited areas of deployment: not every problem can be solved on a Grid infrastructure, and the Grid is best suited to problems that can be split into smaller, independent pieces. Firms have also found that implementing Grid computing has its problems, and the technical hurdles have been high.

Conclusion

An ideal grid should ensure that the resources of the different systems are used seamlessly, without affecting the performance experienced by individual users of those systems. It also goes without saying that adequate security features must be built into the system. Grid technology, planned and executed properly, could bring enormous benefits to the insurance industry. Optimal use of existing resources will be the driving force that ensures competitive advantage and a leadership position.

About the Authors

N.R. Veeraraghavan is a senior consultant in the insurance domain of Infosys Technologies Ltd, India. Nandha Kumar is a consultant in the insurance domain of Infosys Technologies Ltd, India.
