Supercomputers for Finance: A New Challenge

By Julian Fielden

February 29, 2008

According to analyst group IDC, revenue for the overall supercomputing market grew a full 18 percent year on year to reach $3.0 billion in the third quarter of 2007.

Growth of this scale is the result of several factors:

  • In the university market, the traditional home of the supercomputer, growth is driven by budgets swelled by European funding, by investment and knowledge partnerships with private firms, and by larger aggregated UK university and college department budgets.
  • In mainstream markets, growth is driven by the need for competitive advantage, efficiency and productivity gains and, to some extent, by Microsoft’s recently launched Compute Cluster Server, which is broadening the market and encouraging widespread adoption of supercomputing technologies.

By any measure, finance is one of the strongest growth sectors for supercomputers, driven by ever increasing data volumes, greater data complexity and significantly more challenging data analysis — a key outcome of greater competition and regulation (through Basel I, Basel II and Sarbanes-Oxley, for example).

Financial organisations use the power and performance of supercomputers in a variety of ways:

  • Portfolio optimisation — to run models and optimise thousands of individual portfolios overnight based on the previous day’s trading results.
  • Valuation of financial derivatives — a re-insurance firm, for example, may need to value and compute hedge strategies for hundreds of thousands of policyholders in its portfolio (a simplified pricing sketch follows this list).
  • Detection of credit card fraud — supercomputers enable a bank to run more fraud-detection algorithms against tens of millions of credit card accounts.
  • Hedge fund trading — supercomputers allow faster reaction to market conditions, enabling analysts to evaluate more sophisticated algorithms that take larger data sets into account.
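To make the derivatives-valuation workload concrete, the sketch below prices a single European call option by Monte Carlo simulation under a standard Black-Scholes (geometric Brownian motion) model. It is an illustrative toy, not the method used by any firm named in this article; the spot, strike, rate, volatility, maturity and path count are assumed values.

```python
import math
import random

def monte_carlo_call_price(spot, strike, rate, vol, maturity, n_paths, seed=42):
    """Price a European call by simulating terminal prices under
    geometric Brownian motion and discounting the average payoff."""
    rng = random.Random(seed)
    drift = (rate - 0.5 * vol ** 2) * maturity
    diffusion = vol * math.sqrt(maturity)
    payoff_sum = 0.0
    for _ in range(n_paths):
        z = rng.gauss(0.0, 1.0)                      # standard normal draw
        terminal = spot * math.exp(drift + diffusion * z)
        payoff_sum += max(terminal - strike, 0.0)    # call payoff at expiry
    return math.exp(-rate * maturity) * payoff_sum / n_paths

# Illustrative parameters only: spot 100, strike 105, 5% rate, 20% vol, 1 year.
print(monte_carlo_call_price(100.0, 105.0, 0.05, 0.20, 1.0, n_paths=100_000))
```

On a cluster, each node would typically simulate an independent batch of paths with its own seed, for many instruments at once, and the discounted payoffs would be averaged across nodes.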

Today, financial organisations are aiming to increase the computational power and performance at their disposal in one of three ways.

Some financial organisations will work directly with universities and colleges to draw on the power and performance of their academic supercomputers.

For example, a financial analysis firm called CD02 has recently teamed up with the Department of Computing at Surrey University on a three-year project to develop better pricing and risk analysis technology, which will ultimately help banks, hedge funds and investment outfits to trade in a financial instrument called a collateralised debt obligation, or CDO. This project, sponsored by the former DTI, centres on the power of the supercomputer to model huge problem spaces and run simulations for very complex risk analysis — but it is just one of the things the cluster will be used for.

Alternatively, other financial organisations will meet the supercomputer requirement with new implementations within their own organisations, using the very latest technology to provide maximum power and performance. The choice generally depends on the types of applications being run, but many financial organisations are turning to cluster-based supercomputers built from low-cost blade servers.

Lastly, for “early supercomputer adopters,” an increase in computational power and performance will come not from new implementations or from working in partnership with universities and colleges, but from driving greater efficiencies out of existing third- and fourth-generation supercomputer implementations.

All approaches will demand greater storage capacity and instant storage scalability to keep pace with data generation and ensure storage does not become an innovation bottleneck.

In fact, IDC predicts that in 2008 the average worldwide growth of the supercomputing storage market, at about 11 percent, will actually be higher than that of the server market. IDC expects the market for HPC storage to reach about $4.8 billion in 2008, up from $3.8 billion in 2006.

Working directly with universities and colleges aside, implementing a new supercomputer, or aiming to gain efficiencies from an existing supercomputer implementation, is not without problems.

Super Problems

IT managers responsible for supercomputer implementations already operational in financial organisations are often faced with piecemeal systems created by generations of predecessors, each adding their own features and functionality to the overall supercomputer. This can include a mix of hardware, a plethora of in-house-developed code running on old operating systems, and a selection of proprietary applications and other software.

IT managers are also being held back by the management and structure of their data centres, as it becomes clear that there is not a limitless supply of energy, space and budget to run an efficient data centre. For example, plugging yet another server into the data centre is no longer an option for companies operating in Canary Wharf, where firms are facing major difficulties securing extra power.

Although some companies are currently unaffected by power shortages, the cost of powering and cooling a data centre is becoming a pressing concern. A study published by consultancy BroadGroup found that the average energy bill to run a corporate data centre in the UK is about £5.3m per year and will double to £11m over five years.

Data centres capable of hosting a supercomputer must also be able to take heat away from the site, and fewer and fewer buildings can cope. And, most importantly, there is simply no space available in London — our financial capital — for expanding data centres. While there are old factories elsewhere in the country, there are limitations on their use for new facilities.

From a storage perspective, as Moore’s Law continues to ramp up processor speed, the storage bottleneck is becoming more pronounced for end users. In many cases, IT managers have been burdened with conventional storage technologies that require customisation before they can be effectively applied in supercomputing environments. And even then, the technology may not be fully capable of meeting supercomputing performance, accessibility, capacity and availability requirements.

Finally, as data centre managers look to provide an IT infrastructure that can cost-effectively scale to meet rapidly changing business requirements, they are also evaluating switching and interconnect technology and realising that, in many cases, while the servers and storage may be suitable, the data transport mechanism between them is lagging behind.

Solution

For financial organisations either undertaking a new implementation or aiming to optimise an existing implementation, it is essential to take some precautionary steps:

1. Prepare and plan properly

Financial organisations must analyse and build a plan that considers current and future needs of the supercomputer in terms of power, cooling, space, effects on the environment, costs and management.

Naturally, it is equally important to consider user requirements for the supercomputer, including the need for future scalability and upgrades.

2. Select a qualified integrator

Financial organisations should look for a specialist supercomputer integrator, one that demonstrates first-rate credentials in the delivery of supercomputing projects. The following factors should always be taken into consideration:

Understand the customer. Supercomputer integrators must demonstrate a thorough understanding of their customers’ markets. Integrators must be able to understand what customers are trying to achieve, why their research or project is important and why they are trying to do it in that way.

Demonstrate history. As budgets grow and aggregate, customers are more wary and cautious of investment. Customers are looking for integrators with experience, a history and proven track record in delivering supercomputer solutions.

Hardware vendor relationship. In many instances, supercomputer solutions built for customers are firsts: the fastest, the largest and often unique. Some integrators will ‘play one hardware vendor off against another’, or propose solutions based on hardware from outside the traditional Tier 1 manufacturers. However, when you’re working on the leading edge, problems can occur, so it is essential for customers to know that integrators have a close, long-term relationship with the primary hardware supplier.

The HPC ecosystem. A single IT vendor cannot always supply the whole supercomputer solution — server, storage, interconnect, operating system, applications, etc. Customers therefore need a well-connected integrator that can call on existing technology partner relationships to enhance solutions from the primary hardware vendor.

Technology innovation. Customers look to integrators to provide solutions based on the best technology available. Integrators must therefore react to technology innovation quickly.

Protect the environment. Environmentally conscious customers will be looking for integrators that can meet not just their computing needs but also their green needs, and this means designing more complex solutions: for example, using larger numbers of lower-powered processors, or vastly improving efficiency through virtualisation technology.

3. Make best use of technology

Any financial organisation looking to gain maximum power and performance, cost-effectively, from a new supercomputer server implementation should avoid proprietary hardware, which a manufacturer might choose to drop.

Servers

For server performance, it is common sense for financial organisations to purchase blade server technology, which can use up to 50 percent less floor space in a data centre and up to 58 percent less energy than traditional servers.

Software

For existing implementations, financial organisations should look for software, such as products from vendor Cluster Resources, that will help drive efficiencies and performance from existing cluster operations. Such software can take full responsibility for scheduling, managing, monitoring and reporting of cluster workloads, maximising job throughput.
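As a deliberately simplified illustration of what such workload-management software automates, the Python sketch below applies one basic policy: greedily starting any queued job that fits on the nodes currently free. It is not the algorithm used in Cluster Resources’ products; the job names, node counts and runtimes are hypothetical.

```python
from collections import namedtuple

Job = namedtuple("Job", "name nodes_needed runtime_hours")

def backfill_schedule(jobs, free_nodes):
    """Greedy backfill: walk the queue in priority order and start any
    job that fits on the nodes currently free. Returns (started, waiting)."""
    started, waiting = [], []
    for job in jobs:
        if job.nodes_needed <= free_nodes:
            free_nodes -= job.nodes_needed   # allocate nodes to this job
            started.append(job)
        else:
            waiting.append(job)              # stays queued until nodes free up
    return started, waiting

# Hypothetical queue on a 64-node cluster.
queue = [
    Job("risk-batch", 48, 6.0),
    Job("fraud-scan", 32, 2.0),    # too big once risk-batch starts, so it waits
    Job("portfolio-opt", 16, 1.5),
]
running, queued = backfill_schedule(queue, free_nodes=64)
print([j.name for j in running])  # ['risk-batch', 'portfolio-opt']
print([j.name for j in queued])   # ['fraud-scan']
```

A production workload manager layers priorities, reservations, fair-share accounting and node health monitoring on top of this kind of placement decision.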

Storage

New supercomputers have a pick of storage technologies: RAID (Redundant Array of Inexpensive Disks), SAN (Storage Area Network), NAS (Network Attached Storage), HSM (Hierarchical Storage Management), tape libraries and silos.

Perhaps more important than the choice of storage hardware is the introduction of a scalable, high-performance file system with Information Lifecycle Management (ILM) capabilities, such as IBM’s General Parallel File System (GPFS), to keep up with the demands of real-time data processing.

GPFS enables additional storage capacity and performance to be added and made operational in minutes with no interruption to users or applications, scaling to multiple petabytes with hundreds of gigabytes per second of throughput. Once data has been processed, it can be seamlessly relocated to lower-cost storage for archiving, providing financial organisations with a single, easy-to-manage pool of resources.
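The ILM idea can be illustrated with a short Python sketch of an “age-out” policy: files that have not been accessed for a set number of days are moved from an expensive fast tier to a cheaper archive tier. This is only an analogy for what GPFS does internally through its policy engine; the directory paths and the 30-day threshold are assumptions.

```python
import shutil
import time
from pathlib import Path

def migrate_cold_files(fast_tier, archive_tier, max_idle_days=30):
    """Move files whose last access time is older than max_idle_days
    from the fast (expensive) tier to the archive (cheap) tier."""
    cutoff = time.time() - max_idle_days * 86_400   # seconds per day
    archive_tier.mkdir(parents=True, exist_ok=True)
    for path in fast_tier.rglob("*"):
        if path.is_file() and path.stat().st_atime < cutoff:
            shutil.move(str(path), str(archive_tier / path.name))

# Hypothetical tier locations.
migrate_cold_files(Path("/gpfs/fast"), Path("/gpfs/archive"), max_idle_days=30)
```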

Interconnect

IT managers should also carefully consider switching and interconnect technology. Currently, an interconnect battle rages between InfiniBand and 10 GigE. IDC expects the use of both of these high-speed interconnect technologies to grow.

A pervasive, low-latency, high-bandwidth interconnect, InfiniBand is backed by a steering committee made up of the world’s leading IT vendors, and is expected to win the battle.

Management

There are not many IT departments that have in-depth knowledge and experience of supercomputer systems. Without assistance from external suppliers it can take considerably longer to get equipment up and running, upgrades complete, software in place and systems configured to drive maximum power and performance.

Taking commercial confidentiality and data security into consideration, financial organisations should consider cluster management and support services — outsourced support operations — which enable them to focus all available IT department resources on non-cluster-related queries and user problems.

Conclusion

Whether your financial organisation is aiming for a new implementation or to drive efficiency from an existing one, and whether you have legacy problems or a greenfield site, one thing is certain: financial organisations can use the power and performance of a supercomputer to help deliver the most accurate, comprehensive and actionable intelligence, providing that all-important competitive advantage.
