Data Integration Goes Dynamic

By Dennis Barker

September 17, 2008

Like death and taxes, the perplexities of data integration are unavoidable. The never-ending data explosion is just one factor. Mergers and acquisitions, globalization, outsourcing, partnerships, and regulations all contribute to the massive pile-up of data from different repositories, business systems and applications, in different formats and languages, structured and unstructured. Giving people a single, cohesive view into what would otherwise be a massive quagmire is not easy.

“Data integration is always a challenge and will remain that way because data grows exponentially and new types always have to be added to the mix,” says Krishna Roy, enterprise software analyst for The 451 Group. “Keeping up with the growth of data and enabling it to be not only integrated, but cleansed, as well, is the big challenge.”

According to Roy and other enterprise IT analysts, one of the leading companies meeting that challenge is Informatica. The company’s flagship, PowerCenter, is a platform for accessing and integrating data from different business systems and repositories, then sharing that data throughout the enterprise. A grid version lets organizations distribute data integration tasks in a scalable, resilient, high-performance environment.

PowerCenter, like Informatica, has evolved over the years to handle all the different chores involved in an integration undertaking, including data migration and replication, synchronization, master data management, governance and standardization. “We started out to help with the automation of data warehouses, taking data from lots of different sources and providing a holistic view of it, whether it came from mainframes, packaged applications, databases, message queues, all the different feeds, and even data outside the enterprise,” says Adam Wilson, senior vice president of product management and marketing. During the past eight or so years, the company has expanded beyond data warehousing into “broader data integration, including data from outside the enterprise, data that’s structured or unstructured,” Wilson says. “It’s really broader business intelligence we’re focusing on.”

The increasing need for better data integration is being driven, Wilson says, by companies trying to “get proactive about governing their data, and providing access to that data,” plus globalization, which brings not only new data systems to contend with, but also new sources, such as partners and providers, not to mention new formats, character sets and regulations.

After Virgin Media in the United Kingdom acquired ntl and Telewest as part of its expansion into online and telephony services, the firm had 20 new data sources and 5.6 million customers to consolidate into its operational systems. “They said they really needed a way to pull together all this data that’s smeared across all these different systems,” Wilson says. “They wanted a single view of all their 10 million customers, and that information was kept in an Oracle data warehouse, various customer management systems, and running on a mix of hardware and operating systems.” Taking advantage of PowerCenter’s real-time capabilities, Virgin was able to build a consolidated data integration hub that delivers updated information, resulting in better customer service and more accurate market analysis.

Getting on the Grid

With PowerCenter 8 in 2006, Informatica extended its integration capabilities to the grid, enabling customers to distribute tasks across multiple processor nodes while taking advantage of commodity hardware, scalability and high availability. With the Enterprise Grid Option, Informatica says, it has developed a grid system that understands data integration tasks and the resources they need, and is able to adapt accordingly. PowerCenter 7, released back in 2003, included some basic grid support, but version 8 with the Enterprise Grid Option brings the platform’s whole shebang to a grid environment: universal access to a wide range of applications and legacy systems; data delivery functions; and tools for capturing, cleansing, managing and migrating data.

In the PowerCenter grid, data integration services, repository services, and logging services run on one or more nodes (logical representations of a physical machine). On each node, a service manager handles the services assigned to that node. Each manager keeps statistics on CPU availability, available memory and the number of running threads. What Informatica calls “gateway nodes” control the routing of service manager requests. The gateway keeps an eye on the availability of other nodes, manages application services, and makes sure services execute by dynamically redirecting work to a secondary node if the primary node is down. Administrators can specify the resources available to run a task on each node, and can assign higher priority to the most important integration operations. A Web-based GUI console provides central control for adding nodes or services and managing resources across the grid.
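How such routing might look in practice can be sketched in a few lines of Python. This is a generic illustration, not Informatica's implementation; the node names, statistics, and the pick_node logic are hypothetical stand-ins for the service manager statistics and gateway failover behavior described above.

from dataclasses import dataclass

@dataclass
class NodeStats:
    """Statistics a node's service manager might report to the gateway."""
    name: str
    cpu_free: float      # fraction of CPU currently idle
    mem_free_mb: int     # available memory in megabytes
    threads: int         # number of running threads
    available: bool      # heartbeat / availability status

class Gateway:
    """Toy gateway: routes a request to its primary node, or fails over
    to the healthiest available node when the primary is down."""

    def __init__(self, nodes):
        self.nodes = {n.name: n for n in nodes}

    def pick_node(self, primary):
        node = self.nodes.get(primary)
        if node and node.available:
            return node.name
        # Primary is down: redirect to the available node with the most headroom.
        candidates = [n for n in self.nodes.values() if n.available]
        if not candidates:
            raise RuntimeError("no nodes available")
        return max(candidates, key=lambda n: (n.cpu_free, n.mem_free_mb)).name

nodes = [
    NodeStats("node-a", cpu_free=0.10, mem_free_mb=2048, threads=40, available=False),
    NodeStats("node-b", cpu_free=0.65, mem_free_mb=8192, threads=12, available=True),
]
print(Gateway(nodes).pick_node("node-a"))  # primary is down, so this prints node-b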

Adaptive load balancing dynamically assigns and executes tasks on the basis of resource availability or according to the resource requirements of a particular data integration task. Dynamic partitioning adjusts the parallel execution plan when nodes are added or dropped. PowerCenter’s load balancing technology is platform-agnostic and can interoperate across a heterogeneous grid environment (Linux, Windows, Unix; different CPU speeds; 32- and 64-bit software; varying memory capacities among nodes; etc.).
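The core idea of dynamic partitioning can also be illustrated with a short, hypothetical sketch: when a node joins or leaves, the job's partitions are simply redealt across whatever nodes remain. PowerCenter's actual execution planner is far more sophisticated; the function below only conveys the principle.

def assign_partitions(partitions, nodes):
    """Deal data partitions round-robin across the currently available nodes."""
    if not nodes:
        raise RuntimeError("no nodes available")
    plan = {node: [] for node in nodes}
    for i, part in enumerate(partitions):
        plan[nodes[i % len(nodes)]].append(part)
    return plan

partitions = ["part-%d" % i for i in range(8)]
print(assign_partitions(partitions, ["node-a", "node-b", "node-c"]))
# Drop node-c and the same call produces a new execution plan spread over two nodes.
print(assign_partitions(partitions, ["node-a", "node-b"]))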

To guarantee integration tasks are processed according to business priorities, PowerCenter uses resource reservation to set aside certain nodes for certain tasks, so that a CPU-intensive job, for example, can be steered to the node with the most CPU power and memory, and the administrator can make sure that node gets the job. Admins can assign different service level values to tasks, based on priority. The “gang dispatcher” makes sure that when tasks are broken up into subtasks to process on different nodes, all subtasks are executed at the same time. Resource threshold provisioning is meant to help avoid overloading nodes by limiting the number of threads, the amount of memory that can be consumed, and the number of concurrent data integration tasks; the company says this allows load spikes to be absorbed without degrading overall performance.
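A rough sketch of how service levels, resource reservation, and thresholds could interact is shown below. It is an invented illustration rather than PowerCenter's dispatcher: the node table, task fields, and limits are hypothetical, and real gang dispatching and provisioning involve far more bookkeeping.

import heapq

# Hypothetical node table: which task types each node is reserved for,
# plus a simple threshold on concurrent tasks.
NODES = {
    "cpu-node":  {"reserved_for": {"cpu_heavy"}, "max_tasks": 4, "running": 0},
    "gp-node-1": {"reserved_for": set(),         "max_tasks": 8, "running": 0},
}

def dispatch(tasks):
    """Dispatch tasks in service-level order to a node that is either reserved
    for the task's type or unreserved, and still below its task threshold."""
    # A lower service_level value means higher business priority.
    queue = [(t["service_level"], i, t) for i, t in enumerate(tasks)]
    heapq.heapify(queue)
    schedule = []
    while queue:
        _, _, task = heapq.heappop(queue)
        for name, node in NODES.items():
            reserved_ok = not node["reserved_for"] or task["type"] in node["reserved_for"]
            under_threshold = node["running"] < node["max_tasks"]
            if reserved_ok and under_threshold:
                node["running"] += 1
                schedule.append((task["name"], name))
                break
    return schedule

tasks = [
    {"name": "nightly-aggregation", "type": "cpu_heavy", "service_level": 1},
    {"name": "log-sync",            "type": "io_light",  "service_level": 3},
]
print(dispatch(tasks))  # the high-priority, CPU-heavy task lands on the reserved node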

“We have created a data services platform that can deploy data integration, metadata, and profiling and connectivity services out in a grid,” Wilson says. “Customers typically use us in one of two scenarios. We’re moving terabytes and terabytes in incredibly short load windows, so we break the work down, parallelize it, and spread it across the grid, using rules-based or cost-based optimization. In the other scenario, customers have built data services within our Informatica software and they are calling us in a request-and-response manner, where we have to handle thousands of requests for those data services, so we scale up to distribute those requests to nodes that have been identified in our domain.”

“In both throughput and concurrency situations, our ability to distribute workloads gracefully across a grid is our way to ensure that customers’ service level agreements are honored,” Wilson says. “That’s why we’ve incorporated things like automatic load balancing and resiliency, and things like letting every task have its own service level based on its business priority.”

Millions of Transactions a Day

LinkShare, a Web marketing company with thousands of clients, needs to track performance, order transactions, lead generation, clicks on ads, content inventory, and other items. As the company has acquired customers and partners, data volumes have grown to the point that it is capturing information from more than 400 sources and processing more than 300 million aggregate transactions a day. PowerCenter, which LinkShare has been using since 2001, extracts and transforms all this transaction data, puts it into a near real-time data store, then extracts it again into an enterprise data warehouse. PowerCenter provides the processes that enable the different LinkShare systems (for invoice processing, order processing, payments, reporting, analytics, etc.) and applications (MySQL, Oracle 10g, IBM DB2) to talk to each other.
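The shape of such a pipeline, stripped to its essentials, is a chain of extract, transform, and load steps. The sketch below is a deliberately minimal, hypothetical version using SQLite in place of the real source systems and staging store; the table names and the currency transformation are invented and only stand in for whatever business rules actually apply.

import sqlite3

def extract(conn):
    """Pull raw transaction rows from a source system."""
    return conn.execute("SELECT id, amount_cents, currency FROM raw_transactions").fetchall()

def transform(rows):
    """Normalize amounts to dollars and standardize the currency code."""
    return [(row_id, cents / 100.0, currency.upper()) for row_id, cents, currency in rows]

def load(conn, rows):
    """Write cleansed rows into the staging store that feeds the warehouse."""
    conn.executemany("INSERT INTO staged_transactions VALUES (?, ?, ?)", rows)
    conn.commit()

# In-memory stand-ins for a source database and the staging store.
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE raw_transactions (id INTEGER, amount_cents INTEGER, currency TEXT)")
src.execute("INSERT INTO raw_transactions VALUES (1, 1999, 'usd')")

stage = sqlite3.connect(":memory:")
stage.execute("CREATE TABLE staged_transactions (id INTEGER, amount REAL, currency TEXT)")

load(stage, transform(extract(src)))
print(stage.execute("SELECT * FROM staged_transactions").fetchall())  # [(1, 19.99, 'USD')]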

Customers are feeding data into the system around the clock, so LinkShare needs to ensure high-performance processing of large data volumes in order to meet service level agreements. The company taps PowerCenter to load-balance more than 700 sessions that run in short batch-processing windows throughout the day.

LinkShare moved its operation to the Enterprise Grid Option in 2007 while also moving to 64-bit hardware. With its previous cluster, the IT team had to manually configure load balancing and resource utilization. The Enterprise Grid Option takes care of those functions dynamically, without any re-coding of applications, Wilson says.

‘A Good Place to Be’

By exploiting grid architecture and its own platform and technologies, Informatica is able to deliver the capabilities required for real-time processing of data integration tasks. “The leading challenge to data integration practitioners today is the need to move data in real time or on demand,” says Philip Russom, senior manager of The Data Warehousing Institute’s TDWI Research wing. “Most data integration tools and user practices were originally designed for latent batch processing in nightly windows. Vendor tools have come a long way in this regard, and users have educated themselves in more dynamic applications of data integration.”

“Informatica is one of the leading vendors in the data integration space, along with Business Objects, IBM, Microsoft, Oracle and SAS. Informatica has shown leadership in recent years by championing new practices and techniques like data integration competency centers, service-oriented architecture, embedded data quality functions, and business-to-business data integration,” Russom says.

The latest PowerCenter 8.6 “sports a number of enhancements that form part of an ongoing mission to extend its usage beyond extract/transform/load (ETL) projects for data warehousing,” says The 451 Group’s Roy. “The advent of the Real-Time Edition, the first tools for analysts and data stewards, a new B2B Data Transformation engine and a data-loader service for integrating off-premise data managed by Salesforce.com with on-premise data, are some of the improvements in 8.6 crafted with this mission in mind.”

The company continues to “retain the mantle of largest independent data integration vendor,” according to a report issued in August by The 451 Group (which notes that its competitors include IBM and Oracle, rivals with whom it also shares technology). “Informatica has been increasing revenue by 20 percent year-on-year for several years now — a growth rate it maintained for the first six months of 2008 when sales rose to $217.5 million from $181.4 million in the same six months of 2007.”

According to Informatica’s Wilson, the company’s installation base “is growing 20 percent year over year while enterprise software is growing 8 to 9 percent. Ninety of the Fortune 100 companies are our clients.”  The biggest problem Informatica solves today is “ETL into the warehouse,” Roy says. “However, data migration, compliance/data governance, and master data management are all areas in which Informatica increasingly plays. I think being the Switzerland of the data integration world is a good place to be given that integration software needs to be independent of databases, applications, and so on.”
