Introduction
Financial services firms are examining every aspect of their infrastructures to squeeze any delays out of their end-to-end computational workflows.
This is a particular issue for research and post-trade analysis. In today's volatile markets, such work requires faster results from much more granular information derived from ever larger data pools, so companies can make more intelligent and profitable decisions in shorter time frames.
Essentially, financial services research and post-trade analysis requires an infrastructure that can accommodate Big Data while speeding computational results. On the acceleration front, great attention has been placed on faster, more powerful processors, complemented by high-performance storage and networking elements. For instance, it is quite common to see InfiniBand switches and adapter cards used to connect servers to storage in financial services computational infrastructures.
On the Big Data front, most of the attention has centered on the use of systems that can scale to meet the growing data volumes used in post-trade research. To that end, great emphasis has been placed on systems that can add large data volumes to an existing storage infrastructure without disrupting production systems.
However, there is still room for improvement. For example, improving application performance has typically been achieved by the use of large cache and flash memory. But to produce better research and post-trade results today requires processing larger volumes of more granular data. This Big Data realistically cannot be housed in cache.
Additionally, better results can be obtained by integrating systems to avoid the common siloing of analytics by business units. Such islands of analytics have sprung up as groups have relied on hardware and an infrastructure optimized for their particular analytics applications.
To overcome these limitations, some companies are placing new emphasis on improving network and storage performance, with the goal of moving beyond the limits of cache memory and opening up their modeling and analysis efforts to greater volumes of data. Accomplishing this requires storage solutions that scale in both volume and performance, allowing for the optimization of computational workflows.
What’s needed for improvement?
Financial services companies have long been at the forefront in the use of powerful IT solutions, and they have scaled out their computing environments to meet rising demand. Today, there is a need for faster solutions that incorporate more data to analyze more post-trade risk scenarios, so firms can pursue more aggressive strategies with lower risk.
To carry this out requires dealing with the issues introduced by Big Data and the use of sophisticated analysis routines. Both place more demands on an infrastructure and complicate attempts to improve application performance.
Areas such as derivative analysis, actuarial analysis, and portfolio risk measurement, to name a few, require more and more compute resources to stay accurate and competitive. And there is a need to combine information about options, futures, forwards, and interest-rate swaps, for example, to manage pricing, hedge risks, and identify opportunities for arbitrage. This requires heavy usage of floating-point arithmetic operations, repeated over millions of variables.
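As an illustration of the floating-point-heavy computation described above, the sketch below prices a European call option by Monte Carlo simulation under Black-Scholes dynamics. This is a standard textbook technique, not a method drawn from any particular firm's models; in production, loops like this run over millions of paths across thousands of instruments, which is what drives the demand for compute and I/O.

```python
import math
import random

def mc_european_call(spot, strike, rate, vol, maturity, n_paths, seed=42):
    """Illustrative Monte Carlo pricer for a European call option
    under geometric Brownian motion (Black-Scholes dynamics)."""
    rng = random.Random(seed)
    # Terminal price: S_T = S0 * exp((r - vol^2/2)*T + vol*sqrt(T)*Z)
    drift = (rate - 0.5 * vol ** 2) * maturity
    diffusion = vol * math.sqrt(maturity)
    payoff_sum = 0.0
    for _ in range(n_paths):
        z = rng.gauss(0.0, 1.0)  # one standard normal draw per path
        terminal = spot * math.exp(drift + diffusion * z)
        payoff_sum += max(terminal - strike, 0.0)
    # Discount the average payoff back to today
    return math.exp(-rate * maturity) * payoff_sum / n_paths

# Example: at-the-money call, 5% rate, 20% vol, 1-year maturity
price = mc_european_call(100.0, 100.0, 0.05, 0.2, 1.0, 200_000)
```

With 200,000 paths this estimate converges toward the closed-form Black-Scholes value (about 10.45 for these parameters); accuracy improves only with the square root of the path count, which is why real risk runs consume so many floating-point operations.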
Variants on this type of analysis often are carried out simultaneously in different parts of an organization. For example, a firm that also offers insurance might want to measure and mitigate risk and tailor its products for different individuals or groups. This requires mining thousands of data points to produce accurate models.
The end result is that over time most organizations have ended up with islands of analytics. And each analysis routine and its associated data has different throughput and I/O requirements. In the past, it was quite common to select systems that matched each application's performance requirements.
As the number of routines used in post-trade analysis grows and changes over time to address new risks, this islands-of-analytics approach is hard to sustain. The reason: islands of analytics cost more to run. OPEX costs include management time, electricity to power and cool devices, data center rack space, software licenses, service contracts, and other items. Multiple islands of operations will likely add duplicate or underused equipment and compute resources that could be shared if operations were unified.
Islands of analytics also require duplicate data entry, which is an error-prone process. This can introduce errors in the research and analysis.
As a result, the islands should be eliminated and the analysis routines must use a common infrastructure. Additionally, the infrastructure must optimize storage performance to meet the increased demands caused by server virtualization, where many applications need simultaneous access to data. And the infrastructure must address the throughput and I/O issues related to analyzing larger datasets with more powerful processors.
What’s needed is a new focus on storage. Specifically, the consolidation of analytic islands can be achieved with a solution that leverages advances in storage design to scale in both volume and performance, backed by a robust file system that accommodates the growing volumes of data needed in post-trade analytics workflows.
DDN as your technology partner
Traditional storage solutions cannot deal with today’s increased requirements for storage performance and scalability. To reduce latency and improve the performance of post-trade analytics applications, DataDirect Networks (DDN) storage solutions offer the massive scalability required for Big Data, enabling the convergence of data storage and processing.
DDN solutions are already being used in many financial services organizations for their research and post-trade analytics operations. The solutions are optimized for I/O and throughput, adaptable to any workload, and extremely scalable in capacity and density. Based on its Storage Fusion Architecture, the DDN SFA 10K line offers a number of firsts, including 15 GB/s host throughput for both reads and writes, 120 TB of storage per drawer (1.8 PB per rack), and the ability to scale to 1,200 drives per array for 3.6 PB per system. Furthermore, DDN lets organizations balance performance against cost by intermixing SSD, SAS, and SATA drives.
The DDN solutions thus offer the needed scalability without compromising on performance, helping eliminate islands of analytics and allowing consolidation of market research and post-trade analytics storage. This can cut OPEX, reducing management chores, power and cooling needs, and space requirements in a data center.
As noted above, modeling and analysis based on larger, more granular datasets delivers improved research and post-trade results. But accommodating such large volumes of data in cache to improve performance is not practical.
DDN offers a different approach. Its storage solutions offer native support for InfiniBand and use flash to accelerate I/O and throughput. In this way, its systems can pump data to the processors faster, improving performance over that offered by traditional storage solutions.
The benefit of this technology approach has been quantified in benchmark tests. In public STAC-M3 benchmark tests using kdb+ (a database for managing large volumes of real-time and historical time-series data), a DDN SFA 10K-X system cut latency by a factor of 3.5 compared to traditional storage solutions. But even more important to post-trade analytics, the solution delivered results 7.5 times faster than traditional storage solutions.
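Benchmarks like STAC-M3 exercise whole-day scans of tick data that are throughput-bound rather than cache-friendly. As a purely illustrative toy (not the benchmark code, and not kdb+ syntax), the sketch below shows the shape of such a scan: a per-symbol volume-weighted average price (VWAP) over a stream of tick records.

```python
from collections import defaultdict

def vwap_by_symbol(ticks):
    """Compute volume-weighted average price per symbol from an
    iterable of (symbol, price, size) tick records. Scanning a full
    day of ticks this way streams far more data than fits in cache,
    which is why storage throughput dominates the runtime."""
    notional = defaultdict(float)  # sum of price * size per symbol
    volume = defaultdict(int)      # total traded size per symbol
    for symbol, price, size in ticks:
        notional[symbol] += price * size
        volume[symbol] += size
    return {s: notional[s] / volume[s] for s in notional if volume[s]}

# Example with a handful of hypothetical ticks
ticks = [("IBM", 100.0, 10), ("IBM", 102.0, 30), ("AAPL", 50.0, 5)]
result = vwap_by_symbol(ticks)  # IBM VWAP = (1000 + 3060) / 40 = 101.5
```

Because each tick is touched exactly once, the aggregation itself is cheap; the limiting factor is how fast the storage layer can feed ticks to the processors, which is precisely what the DDN results above measure.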
This added performance offers competitive and financial advantages. Accelerating computational workflows speeds results, helping organizations make faster, more intelligent decisions. And it lets them analyze more strategies and positions with fewer servers and licenses.
Simply put, DDN SFA solutions are optimized for financial services research and post-trade analytics environments.
For more information about DDN solutions for financial services analytics, visit: http://www.ddn.com/applications/financial-services