March 10, 2009
Pervasive DataMatcher designed to aid compliance and threat assessment and help detect fraud, money laundering, corruption
AUSTIN, Texas, March 10 -- Advancing its emerging leadership in next-generation data-intensive processing and analytics, Pervasive Software Inc. today introduced Pervasive DataMatcher, a high-performance, high-accuracy data matching solution. Pervasive DataMatcher, launched in conjunction with FOSE 2009 at the Walter E. Washington Convention Center in Washington, D.C., is designed to help organizations detect fraud, money laundering and corruption, and to support threat detection, law enforcement applications, compliance monitoring and master data management (MDM).
Organizations of many types incur enormous costs from fraud, corruption and non-compliance. They routinely need to process staggering volumes of data drawn from multiple sources, and that data is often inaccurate, inconsistent and riddled with duplicate records. Pervasive DataMatcher detects these redundancies and correlates records to deliver fast, precise analytic results with rapid ROI.
"The massively parallel-processing horsepower of the Pervasive DataRush data processing engine combined with our matching capabilities forms a powerful solution for translating massive amounts of raw data into actionable intelligence," said Pervasive DataRush General Manager Mike Bryars. "Pervasive DataMatcher provides robust, innovative capabilities valuable to financial, insurance, healthcare, law enforcement and homeland security organizations that want to leverage powerful, next-generation analytics for detecting fraud and corruption, complying with anti-money laundering controls, and security and compliance monitoring."
The seemingly straightforward task of matching data across large, disparate datasets is a daunting challenge for many IT organizations. Even for a fairly small dataset, comparing each record against every other record produces an overwhelming number of comparisons: fully cross-matching a 100,000-row dataset involves nearly 5 billion record comparisons.
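As a quick back-of-the-envelope check of that figure, the all-pairs comparison count for n records is n(n-1)/2; the short Python snippet below (illustrative only, not part of the product) confirms the arithmetic.

```python
# All-pairs matching: n records require n * (n - 1) / 2 comparisons.
n = 100_000
comparisons = n * (n - 1) // 2
print(f"{comparisons:,}")  # 4,999,950,000 -- nearly 5 billion
```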
Pervasive DataMatcher is designed to scale seamlessly to large, complex datasets and can match on any or all fields in a dataset, including fuzzy matching (a simplified sketch of the idea follows below). The solution is built on top of the patent-pending Pervasive DataRush processing engine, enabling it to crunch through massive datasets quickly and accurately on commodity multicore hardware.
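For illustration, here is a minimal Python sketch of per-field fuzzy matching of the kind described above, assuming a simple normalized similarity score with a match threshold. The function names, the threshold value and the use of Python's standard-library difflib are assumptions made for the sketch; they are not Pervasive's API or its matching algorithm.

```python
# Illustrative fuzzy record matching (not Pervasive's actual algorithm):
# two records "match" if every compared field is sufficiently similar.
from difflib import SequenceMatcher

def field_similarity(a: str, b: str) -> float:
    """Return a similarity score in [0, 1] for two field values."""
    return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()

def records_match(rec1: dict, rec2: dict, fields, threshold: float = 0.85) -> bool:
    """Fuzzy-match two records on any chosen subset of fields."""
    return all(field_similarity(str(rec1[f]), str(rec2[f])) >= threshold
               for f in fields)

# Near-duplicate records that exact matching would miss:
a = {"name": "Jon Smith",  "city": "Austin"}
b = {"name": "John Smith", "city": "Austin"}
print(records_match(a, b, fields=["name", "city"]))  # True
```

A production engine such as DataRush would additionally partition and parallelize the comparison workload across cores; the sketch shows only the per-pair scoring logic.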
As data volumes continue to explode, organizations struggle to detect patterns and extract meaningful insights. "Data governance is key to delivering high-integrity, relevant data, which forms the foundation for extracting knowledge to drive good business decisions," said Mark Beyer, Gartner Research VP. "Users increasingly -- and rightfully -- demand access to relevant data that has undergone complete, meaningful, efficient preparation to get to the right answers faster. Effective data matching capability is one of the keys to preparing information, along with cleansing, parsing, standardization, profiling and others."
Pervasive DataMatcher is currently available. For additional information, visit www.pervasivedatarush.com/.
About Pervasive Software
Pervasive Software (NASDAQ:PVSW) helps companies get the most out of their data investments through embeddable data management, agile data integration software and revolutionary next-generation analytics. The embeddable PSQL database engine allows organizations to embrace new technologies while maintaining application compatibility and robust database reliability in a near-zero database administration environment. Pervasive's multi-purpose data integration platform accelerates the sharing of information among multiple data stores, applications and hosted business systems, and allows customers to re-use the same software for diverse integration scenarios. Pervasive DataRush is an embeddable high-performance software platform for data-intensive processing applications such as claims processing, risk analysis, fraud detection, data mining, predictive analytics, sales optimization and marketing analytics. For more than two decades, Pervasive products have delivered value to tens of thousands of customers in more than 150 countries with a compelling combination of performance, flexibility, reliability and low total cost of ownership. Through Pervasive Innovation Labs, the company also invests in exploring and creating cutting-edge solutions for the toughest data analysis and data delivery challenges. Robin Bloor, founder of Bloor Research and partner at Hurwitz and Associates, recently cited Pervasive as one of the 10 IT Companies to Watch in 2009. For additional information, go to www.pervasive.com.
Source: Pervasive Software Inc.