Supercomputers for Finance: A New Challenge

By Julian Fielden

February 29, 2008

According to analyst group IDC, revenue for the overall supercomputing market grew a full 18 percent year on year to reach $3.0 billion in the third quarter of 2007.

Growth of this scale is the result of several factors:

  • In the university market, the traditional home of the supercomputer, growth is driven by budgets swelled by European funding, by investment and knowledge partnerships with private firms, and by larger, aggregated UK university and college department budgets.
  • In mainstream markets, growth is driven by the need for competitive advantage, efficiency and productivity gains and, to some extent, by Microsoft’s recently launched Compute Cluster Server, which is broadening the market and encouraging wider adoption of supercomputing technologies.

By any measure, finance is one of the strongest growth sectors for supercomputers, driven by ever increasing data volumes, greater data complexity and significantly more challenging data analysis — a key outcome of greater competition and regulation (through Basel I, Basel II and Sarbanes-Oxley, for example).

Financial organisations use the power and performance of supercomputers in a variety of ways:

  • Portfolio optimisation – to run models and optimise thousands of individual portfolios overnight based on the previous day’s trading results.
  • Valuation of financial derivatives – a reinsurance firm, for example, may need to value and compute hedge strategies for hundreds of thousands of policyholders in its portfolio (a minimal pricing sketch follows this list).
  • Detection of credit card fraud – supercomputers enable a bank to run more fraud detection algorithms against tens of millions of credit card accounts.
  • Hedge fund trading – supercomputers allow faster reaction to market conditions, enabling analysts to evaluate more sophisticated algorithms that take into account larger data sets.
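To make the derivative-valuation workload above concrete, here is a minimal sketch of the kind of calculation involved: pricing a single European call option by Monte Carlo simulation under geometric Brownian motion. The instrument, parameters and path count are illustrative assumptions only; a production risk engine would distribute millions of such valuations, on far more complex instruments, across the nodes of a cluster.

```python
# Minimal Monte Carlo valuation of a European call option under
# geometric Brownian motion (Black-Scholes dynamics). All parameters
# here are illustrative; a real risk engine would run millions of such
# valuations in parallel across cluster nodes.
import math
import random


def monte_carlo_call_price(spot, strike, rate, vol, maturity, n_paths=100_000):
    """Estimate the discounted expected payoff of a European call option."""
    drift = (rate - 0.5 * vol ** 2) * maturity
    diffusion = vol * math.sqrt(maturity)
    total_payoff = 0.0
    for _ in range(n_paths):
        z = random.gauss(0.0, 1.0)                   # standard normal draw
        terminal = spot * math.exp(drift + diffusion * z)
        total_payoff += max(terminal - strike, 0.0)  # call payoff at maturity
    return math.exp(-rate * maturity) * total_payoff / n_paths


if __name__ == "__main__":
    # Hypothetical inputs: spot 100, strike 105, 5% rate, 20% vol, 1 year.
    print(monte_carlo_call_price(100.0, 105.0, 0.05, 0.20, 1.0))
```

The appeal to supercomputing is that each simulated path, and each instrument in a portfolio, is independent, so workloads of this kind parallelise almost perfectly across cluster nodes.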

Today, financial organisations are aiming to increase the computational power and performance at their disposal in one of three ways.

Some financial organisations will work directly with universities and colleges to draw on the power and performance of their academic supercomputers.

For example, a financial analysis firm called CD02 has recently teamed up with the Department of Computing at Surrey University on a three-year project to develop better pricing and risk analysis technology, which will ultimately help banks, hedge funds and investment outfits to trade in a financial instrument called a collateralised debt obligation, or CDO. This project, sponsored by the former DTI, centres on the power of the supercomputer to model huge problem spaces and run simulations for very complex risk analysis — though this is just one of the things the cluster will be used for.

Alternatively, for other financial organisations the supercomputer requirement will be met by new implementations — within their own organisations — using the very latest technology to provide maximum power and performance. The choice generally depends on the applications being run, but many financial organisations are turning to cluster-based supercomputers built from low-cost blade servers.

Lastly, for “early supercomputer adopters,” an increase in computational power and performance will not come from new implementations or from working in partnership with universities and colleges, but from driving greater efficiency from existing third- and fourth-generation supercomputer implementations.

All approaches will demand greater storage capacity and instant storage scalability to keep pace with data generation and ensure storage does not become an innovation bottleneck.

In fact, IDC predicts that in 2008 average worldwide growth for the supercomputing storage market will actually be higher than for servers, at about 11 percent. IDC expects the market for HPC storage to reach about $4.8 billion in 2008, up from $3.8 billion in 2006.

Working directly with universities and colleges aside, implementing a new supercomputer, or aiming to gain efficiencies from an existing supercomputer implementation, is not without problems.

Super Problems

IT managers responsible for supercomputer implementations already operational in financial organisations are often faced with piecemeal implementations created by generations of predecessors, each adding their own features and functionality to the overall supercomputer. This can include a mix of hardware, a plethora of in-house developed code running on old operating systems and a selection of proprietary applications and other software.

IT managers are also being held back by the management and structure of their data centres, as it becomes clear that there is not a limitless supply of energy, space and budget to run an efficient data centre. For example, plugging yet another server into the data centre is no longer an option for companies operating in Canary Wharf, where securing extra power has become a major difficulty.

Although some companies are currently unaffected by power shortages, the cost of powering and cooling a data centre is becoming a pressing concern. A study published by consultancy BroadGroup found that the average energy bill to run a corporate data centre in the UK is about £5.3m per year and is expected to more than double, to £11m, over five years.

Data centres capable of hosting a supercomputer must also be able to take heat away from the site – and fewer and fewer buildings can cope. Most importantly, there is simply no space available in London — our financial capital — for expanding data centres. While there are old factories elsewhere in the country, there are limitations on their use for new facilities.

From a storage perspective, as Moore’s law continues to ramp up processor speed, the storage bottleneck is becoming more pronounced for end users. In many cases, IT managers have been burdened with conventional storage technologies that require customisation before they can be effectively applied in supercomputing environments. And even then, the technology may not be fully capable of meeting supercomputing performance, accessibility, capacity and availability requirements.

Finally, as data centre managers look to provide an IT infrastructure that can cost-effectively scale to meet rapidly changing business requirements, they are also evaluating switching and interconnect technology and realising that, in many cases, whilst the servers and storage may be suitable, the data transport mechanism between them is lagging behind.

Solution

For financial organisations either undertaking a new implementation or aiming to optimise an existing implementation, it is essential to take some precautionary steps:

1. Prepare and plan properly

Financial organisations must analyse their requirements and build a plan that considers the current and future needs of the supercomputer in terms of power, cooling, space, effects on the environment, costs and management.

Naturally, it is equally important to consider user requirements for the supercomputer, including the need for future scalability and upgrades.

2. Select a qualified integrator

Financial organisations should look for a specialist supercomputer integrator, one that demonstrates first-rate credentials in the delivery of a supercomputing project. The following factors should always be taken into consideration:

Understand the customer. Supercomputer integrators must demonstrate a thorough understanding of their customers’ markets. Integrators must be able to understand what customers are trying to achieve, why their research or project is important and why they are trying to do it in that way.

Demonstrate history. As budgets grow and aggregate, customers are more cautious about investment. They are looking for integrators with experience and a proven track record in delivering supercomputer solutions.

Hardware vendor relationship. In many instances, supercomputer solutions built for customers are firsts: the fastest, the largest and often unique. Some integrators will ‘play one hardware vendor off against the other’, or propose solutions based on hardware from outside the traditional Tier 1 manufacturers. However, when you’re working on the leading edge, problems can occur, so it is essential for customers to know that their integrator has a close, long-term relationship with the primary hardware supplier.

The HPC ecosystem. A single IT vendor cannot always supply the whole supercomputer solution – server, storage, interconnect, operating system, applications and so on. Customers therefore need a well-connected integrator that can call on existing technology partner relationships to enhance solutions from the primary hardware vendor.

Technology innovation. Customers look to integrators to provide solutions based on the best technology available. Integrators must therefore react to technology innovation quickly.

Protect the environment. Environmentally conscious customers will be looking for integrators that can meet not just their computing needs but also their green needs, and this means designing more complex solutions; for example, using larger numbers of lower-powered processors, or vastly improving efficiency through virtualisation technology.

3. Make best use of technology

Any financial organisation looking to gain maximum power and performance, cost-effectively, from a new supercomputer server implementation should avoid proprietary hardware, which a manufacturer might choose to drop.

Servers

For server performance, it also makes sense for financial organisations to purchase blade server technology, which can use up to 50 percent less floor space in a data centre and up to 58 percent less energy than traditional servers.

Software

For existing implementations, financial organisations should look for software, such as the products from vendor Cluster Resources, that will help drive efficiency and performance from existing cluster operations.

Such software can take full responsibility for scheduling, managing, monitoring and reporting of cluster workloads, maximising job throughput.
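As a purely conceptual illustration of what such workload-management software automates (this is not the Cluster Resources product, whose interfaces are not described in this article), the toy scheduler below places queued jobs onto free nodes in first-come, first-served order and backfills smaller jobs when the job at the head of the queue cannot yet start, one simple way to raise throughput.

```python
# Toy cluster workload scheduler: first-come, first-served with a very
# simple backfill step. Purely illustrative of what commercial workload
# managers automate; job names, node counts and runtimes are made up.
from collections import deque


def schedule(jobs, total_nodes):
    """jobs: list of (name, nodes_needed, runtime) tuples.
    Returns a list of (name, start_time) pairs."""
    queue = deque(jobs)
    running = []            # (finish_time, nodes_held) for running jobs
    free = total_nodes
    now = 0
    plan = []

    while queue or running:
        # Release nodes held by any job that has finished by 'now'.
        finished = [(f, n) for f, n in running if f <= now]
        running = [(f, n) for f, n in running if f > now]
        free += sum(n for _, n in finished)

        # Start as many jobs as currently fit.
        started = True
        while queue and started:
            started = False
            name, need, runtime = queue[0]
            if need <= free:
                queue.popleft()                 # head of the queue starts
            else:
                # Backfill: look further down the queue for a job that fits now.
                for i, (bname, bneed, bruntime) in enumerate(queue):
                    if bneed <= free:
                        name, need, runtime = bname, bneed, bruntime
                        del queue[i]
                        break
                else:
                    break                       # nothing fits; wait for a finish
            free -= need
            running.append((now + runtime, need))
            plan.append((name, now))
            started = True

        if queue and not running:
            break                               # remaining jobs can never fit
        if running:
            now = min(f for f, _ in running)    # jump to the next job completion

    return plan


if __name__ == "__main__":
    demo = [("riskA", 8, 4), ("riskB", 16, 2), ("fraud", 4, 1)]
    print(schedule(demo, total_nodes=16))       # [('riskA', 0), ('fraud', 0), ('riskB', 4)]
```

Real workload managers layer priorities, reservations, fair-share policies, monitoring and reporting on top of this basic loop.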

Storage

New supercomputer implementations have their pick of storage technologies: RAID (Redundant Array of Inexpensive Disks), SAN (Storage Area Network), NAS (Network Attached Storage), HSM (Hierarchical Storage Management), tape libraries and silos.

Perhaps more important than the choice of storage hardware, though, is the file system: financial organisations should introduce a scalable, high-performance file system with Information Lifecycle Management (ILM) capabilities, such as IBM’s General Parallel File System (GPFS), to keep up with the demands of real-time data processing.

GPFS enables additional storage capacity and performance to be added and made operational in minutes, with no interruption to users or applications, scaling to multiple petabytes with hundreds of gigabytes per second of throughput. Once data has been processed, it can be seamlessly relocated to lower-cost storage for archiving, providing financial organisations with a single, easy-to-manage pool of resources.
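To picture what policy-driven ILM does, the sketch below simply relocates files that have not been read for a configurable number of days from a fast "online" directory to a cheaper "archive" directory. It is a conceptual stand-in only: the directory paths and the 30-day threshold are invented for illustration, and real GPFS ILM is driven by policy rules evaluated inside the file system rather than by an external script like this.

```python
# Conceptual sketch of information lifecycle management (ILM): move
# files untouched for N days from fast storage to a cheaper archive
# tier. Illustrative only -- GPFS implements this through its own
# policy engine, not through a script of this kind.
import os
import shutil
import time

SECONDS_PER_DAY = 86_400


def migrate_cold_files(online_dir, archive_dir, max_idle_days=30):
    """Relocate files not accessed within max_idle_days to the archive tier."""
    cutoff = time.time() - max_idle_days * SECONDS_PER_DAY
    os.makedirs(archive_dir, exist_ok=True)
    for name in os.listdir(online_dir):
        path = os.path.join(online_dir, name)
        # Note: access times are only meaningful if the file system records them.
        if os.path.isfile(path) and os.stat(path).st_atime < cutoff:
            shutil.move(path, os.path.join(archive_dir, name))  # demote to cheap tier


if __name__ == "__main__":
    # Hypothetical mount points for the fast and archive storage pools.
    migrate_cold_files("/data/online", "/data/archive", max_idle_days=30)
```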

Interconnect

IT managers should also carefully consider switching and interconnect technology. Currently, an interconnect battle rages between InfiniBand and 10 GigE. IDC expects the use of both of these high-speed interconnect technologies to grow.

A pervasive, low-latency, high-bandwidth interconnect, InfiniBand is backed by a steering committee made up of the world’s leading IT vendors, and is expected to win the battle.
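One simple way to reason about the two technologies is the standard latency-bandwidth model, in which the time to move a message of n bytes is roughly a fixed per-message latency plus the transfer time. The numbers below are illustrative assumptions rather than measured figures for either interconnect:

```latex
t(n) \approx \alpha + \frac{n}{\beta},
\qquad \text{e.g. } \alpha = 2\,\mu\text{s},\ \beta = 1.25\ \text{GB/s (10 Gbit/s)}
\;\Rightarrow\; t(1\,\text{KB}) \approx 2.8\,\mu\text{s},\quad t(1\,\text{MB}) \approx 0.8\,\text{ms}.
```

For bulk transfers the bandwidth term dominates, but for the small, frequent messages typical of tightly coupled cluster codes the latency term dominates, which is why an interconnect's latency matters as much as its headline bandwidth.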

Management

Few IT departments have in-depth knowledge and experience of supercomputer systems. Without assistance from external suppliers, it can take considerably longer to get equipment up and running, upgrades completed, software in place and systems configured for maximum power and performance.

Provided commercial confidentiality and data security can be maintained, financial organisations should consider Cluster Management and Support Services — outsourced support operations — which enable them to focus all available IT department resources on non-cluster-related queries and user problems.

Conclusion

Whether your financial organisation is aiming for a new implementation or to drive efficiency from an existing one, and whether you have legacy problems or a greenfield site, one thing is certain: financial organisations can use the power and performance of a supercomputer to help deliver the most accurate, comprehensive and actionable intelligence, providing that all-important competitive advantage.
