Supercomputers for Finance: A New Challenge

By Julian Fielden

February 29, 2008

According to analyst group IDC, revenue for the overall supercomputing market grew a full 18 percent year over year to reach $3.0 billion in the third quarter of 2007.

Growth of this scale is the result of several factors:

  • In the university market, the traditional home of the supercomputer, growth is driven by increasing budgets swelled by European funding, by investment and knowledge partnerships with private firms, and by larger aggregated UK university and college department budgets.
  • In mainstream markets, growth is driven by the need for competitive advantage, efficiency and productivity gains, and, to some extent, by Microsoft’s recently launched Compute Cluster Server, which is broadening the market and encouraging widespread adoption of supercomputing technologies.

By any measure, finance is one of the strongest growth sectors for supercomputers, driven by ever-increasing data volumes, greater data complexity and significantly more challenging data analysis — a key outcome of greater competition and regulation (through Basel I, Basel II and Sarbanes-Oxley, for example).

Financial organisations use the power and performance of supercomputers in a variety of ways:

  • Portfolio optimisation — to run models and optimise thousands of individual portfolios overnight based on the previous day’s trading results.
  • Valuation of financial derivatives — a re-insurance firm, for example, may need to value and compute hedge strategies for hundreds of thousands of policy holders in its portfolio (see the sketch after this list).
  • Detection of credit card fraud — supercomputers enable a bank to easily run more fraud detection algorithms against tens of millions of credit card accounts.
  • Hedge fund trading — supercomputers allow for faster reaction to market conditions, enabling analysts to evaluate more sophisticated algorithms that take into account larger data sets.
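
To make the valuation workload concrete, the sketch below prices a single European call option by Monte Carlo simulation and spreads the simulated paths across CPU cores. It is a minimal, illustrative example only: the contract parameters, path counts and the use of Python's multiprocessing module are assumptions for the sketch, and a production system would distribute the same divide-and-aggregate pattern across many cluster nodes rather than the cores of one machine.

    import math
    import random
    from multiprocessing import Pool

    # Illustrative contract and market parameters (assumptions for this sketch).
    SPOT, STRIKE, RATE, VOL, MATURITY = 100.0, 105.0, 0.05, 0.20, 1.0

    def simulate_batch(args):
        """Simulate one batch of terminal prices and return the summed call payoffs."""
        n_paths, seed = args
        rng = random.Random(seed)
        payoff_sum = 0.0
        for _ in range(n_paths):
            z = rng.gauss(0.0, 1.0)  # standard normal draw
            s_t = SPOT * math.exp((RATE - 0.5 * VOL ** 2) * MATURITY
                                  + VOL * math.sqrt(MATURITY) * z)
            payoff_sum += max(s_t - STRIKE, 0.0)
        return payoff_sum

    if __name__ == "__main__":
        n_workers, paths_per_worker = 8, 250_000
        batches = [(paths_per_worker, seed) for seed in range(n_workers)]
        with Pool(n_workers) as pool:
            totals = pool.map(simulate_batch, batches)
        # Discount the average payoff back to today to estimate the option value.
        price = math.exp(-RATE * MATURITY) * sum(totals) / (n_workers * paths_per_worker)
        print(f"Estimated European call value: {price:.4f}")

The scheduling software discussed later in this article is what allows the same pattern to be scaled out across hundreds of cluster nodes rather than the eight worker processes used here.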

Today, financial organisations are aiming to increase the computational power and performance at their disposal in one of three ways.

Some financial organisations will work directly with universities and colleges to draw on the power and performance of their academic supercomputers.

For example, a financial analysis firm called CD02 has recently teamed up with the Department of Computing at Surrey University on a three-year bid to develop better pricing and risk analysis technology, which will ultimately help banks, hedge funds and investment outfits to trade in a financial instrument called a collateralised debt obligation, or CDO. The project, sponsored by the former DTI, centres on the power of the supercomputer to model huge problem spaces and run simulations that explore very complex risk analysis — but it is just one of the things the cluster will be used for.

Alternatively, other financial organisations will meet the supercomputer requirement with new implementations — within their own organisations — using the very latest technology to provide maximum power and performance. The approach generally depends on the types of applications being run, but many financial organisations are turning to cluster-based supercomputers built from low cost blade servers.

Lastly, for “early supercomputer adopters,” an increase in computational power and performance will not come from new implementations or working in partnership with universities and colleges, but from driving greater efficiencies from existing third and fourth generation supercomputer implementations.

All approaches will demand greater storage capacity and instant storage scalability to keep pace with data generation and ensure storage does not become an innovation bottleneck.

In fact, IDC predicts that in 2008 the average worldwide growth for the supercomputing storage market will actually be higher than for servers, at about 11 percent. IDC expects the market for HPC storage to reach about $4.8 billion in 2008, up from $3.8 billion in 2006.

Working directly with universities and colleges aside, implementing a new supercomputer or aiming to gain efficiencies from an existing implementation is not without problems.

Super Problems

IT managers responsible for supercomputer implementations already operational in financial organisations are often faced with piecemeal implementations created by generations of predecessors, each adding their own features and functionality to the overall supercomputer. This can include a mix of hardware, a plethora of in-house developed code running on old operating systems, and a selection of proprietary applications and other software.

IT managers are also being held back by the management and structure of their data centres, as it becomes clear that there is not a limitless supply of energy, space and budget to run an efficient data centre. For example, plugging yet another server into the data centre is no longer an option for companies operating in Canary Wharf, which are facing major difficulties securing extra power.

Although some companies are currently unaffected by power shortages, the cost of powering and cooling a data centre is becoming a pressing concern. A study published by consultancy BroadGroup found that the average energy bill to run a corporate data centre in the UK is about £5.3m per year and will double to £11m over five years.

Data centres capable of hosting a supercomputer must also be able to take heat away from the site, and fewer and fewer buildings can cope. Most importantly, there is simply no space available in London — our financial capital — for expanding data centres. While there are old factories elsewhere in the country, there are limitations on their use for new facilities.

From a storage perspective, as Moore’s law continues to ramp up processor speed, the storage bottleneck is becoming more pronounced for end users. In many cases, IT managers have been burdened with conventional storage technologies that require customisation before they can be applied effectively in supercomputing environments. And even then, the technology is not necessarily capable of meeting supercomputing performance, accessibility, capacity and availability requirements.

Finally, as data centre managers look to provide an IT infrastructure that can cost-effectively scale to meet rapidly changing business requirements, they are also evaluating switching and interconnect technology and realising that, in many cases, whilst servers and storage may be suitable, the data transport mechanism between them is lagging behind.

Solution

For financial organisations either undertaking a new implementation or aiming to optimise an existing implementation, it is essential to take some precautionary steps:

1. Prepare and plan properly

Financial organisations must analyse and build a plan that considers current and future needs of the supercomputer in terms of power, cooling, space, effects on the environment, costs and management.

Naturally, it is equally important to consider user requirements for the supercomputer, including the need for future scalability and upgrades.

2. Select a qualified integrator

Financial organisations should look for a specialist supercomputer integrator, one that demonstrates grade one credentials in the delivery of a supercomputing project. The following factors should always be taken into consideration:

Understand the customer. Supercomputer integrators must demonstrate a thorough understanding of their customers’ markets. Integrators must be able to understand what customers are trying to achieve, why their research or project is important and why they are trying to do it in that way.

Demonstrate history. As budgets grow and aggregate, customers are more wary and cautious of investment. Customers are looking for integrators with experience, a history and proven track record in delivering supercomputer solutions.

Hardware vendor relationship. In many instances, supercomputer solutions built for customers are firsts: the fastest, the largest and often unique. Some integrators will ‘play one hardware vendor off against the other’, or propose solutions based on hardware outside of the traditional Tier 1 manufacturers. However, when you’re working on the leading edge, problems can occur, so it is essential for customers to know that integrators have a close and long-term relationship with the primary hardware supplier.

The HPC ecosystem. A single IT vendor cannot always supply the whole supercomputer solution – server, storage, interconnect, operating system, applications, etc. Customers therefore need a well connected integrator that can call on existing technology partner relationships to enhance solutions from the primary hardware vendor.

Technology innovation. Customers look to integrators to provide solutions based on the best technology available. Integrators must therefore react to technology innovation quickly.

Protect the environment. Environmentally conscious customers will be looking for integrators that can meet not just their computing needs but also their green needs, and this means designing more complex solutions: for example, using larger numbers of lower-powered processors or vastly improving efficiency through virtualisation technology.

3. Make best use of technology

Any financial organisation looking to gain maximum power and performance, cost-effectively, from a new supercomputer server implementation should avoid proprietary hardware, which a manufacturer might choose to drop.

Servers

For server performance, it is also common sense for financial organisations to purchase blade server technology, which can use up to 50 percent less floor space in a data centre and up to 58 percent less energy than traditional servers.

Software

For existing implementations, financial organisations should look for software, such as products from vendor Cluster Resources, that will help drive efficiencies and performance from existing cluster operations. Such software can take full responsibility for scheduling, managing, monitoring and reporting of cluster workloads, maximising job throughput.
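
To make that concrete, here is a minimal sketch of how a nightly batch of valuation jobs might be handed to such a workload manager. It assumes a TORQUE/PBS-style batch system of the kind distributed by Cluster Resources, with its qsub and qstat commands available on the submit host; the job names, resource requests and the revalue_portfolio program it launches are hypothetical placeholders, not part of any real product.

    import subprocess
    import tempfile

    # Illustrative PBS job script for one portfolio partition. The job name,
    # resource request and the ./revalue_portfolio command are assumptions.
    JOB_TEMPLATE = (
        "#!/bin/bash\n"
        "#PBS -N revalue_{part}\n"
        "#PBS -l nodes=1:ppn=8,walltime=01:00:00\n"
        "cd $PBS_O_WORKDIR\n"
        "./revalue_portfolio --partition {part}\n"
    )

    def submit_partition(part: int) -> str:
        """Write a job script for one portfolio partition and submit it with qsub."""
        with tempfile.NamedTemporaryFile("w", suffix=".pbs", delete=False) as f:
            f.write(JOB_TEMPLATE.format(part=part))
            script = f.name
        # qsub prints the identifier of the newly queued job on stdout.
        result = subprocess.run(["qsub", script], check=True,
                                capture_output=True, text=True)
        return result.stdout.strip()

    if __name__ == "__main__":
        job_ids = [submit_partition(p) for p in range(16)]  # 16 overnight partitions
        print("Submitted jobs:", job_ids)
        # The scheduler now owns placement, retries and throughput; qstat reports progress.
        print(subprocess.run(["qstat"], capture_output=True, text=True).stdout)

Once the jobs are queued, the workload manager decides where and when each one runs, which is exactly the scheduling and throughput role described above.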

Storage

New supercomputers have their pick of storage technologies: RAID (Redundant Array of Inexpensive Disks), SAN (Storage Area Network), NAS (Network Attached Storage), HSM (Hierarchical Storage Management), tape libraries and silos.

Perhaps more important than the choice of storage hardware is the introduction of a scalable, high-performance file system with Information Lifecycle Management (ILM) capabilities, such as IBM’s General Parallel File System (GPFS), to keep up with the demands of real-time data processing.

GPFS enables additional storage capacity and performance to be added and made operational in minutes, with no interruption to users or applications, scaling to multiple petabytes with hundreds of gigabytes per second of throughput. Once the data has been processed, it can be seamlessly relocated to lower cost storage for archiving, providing financial organisations with a single, easy-to-manage pool of resources.
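
As a purely conceptual illustration of the lifecycle-management idea, the sketch below sweeps a fast storage tier and relocates files that have not been accessed for a set number of days to a cheaper archive tier. GPFS performs this kind of tiering with its own built-in policy engine rather than an external script, and the mount points and 30-day threshold used here are assumptions for the example.

    import shutil
    import time
    from pathlib import Path

    # Assumed mount points for the fast and archive tiers (illustrative only).
    FAST_TIER = Path("/storage/fast")
    ARCHIVE_TIER = Path("/storage/archive")
    AGE_THRESHOLD_DAYS = 30  # relocate files untouched for this many days

    def relocate_cold_files() -> None:
        """Move files not accessed within the threshold from the fast tier to the archive tier."""
        cutoff = time.time() - AGE_THRESHOLD_DAYS * 24 * 3600
        for path in FAST_TIER.rglob("*"):
            if path.is_file() and path.stat().st_atime < cutoff:
                destination = ARCHIVE_TIER / path.relative_to(FAST_TIER)
                destination.parent.mkdir(parents=True, exist_ok=True)
                shutil.move(str(path), str(destination))

    if __name__ == "__main__":
        relocate_cold_files()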

Interconnect

IT managers should also carefully consider switching and interconnect technology. Currently, an interconnect battle rages between InfiniBand and 10 GigE. IDC expects the use of both of these high-speed interconnect technologies to grow.

A pervasive, low-latency, high-bandwidth interconnect, InfiniBand is backed by a steering committee made up of the world’s leading IT vendors, and is expected to win the battle.
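
A simple back-of-the-envelope model helps explain why the interconnect can lag behind servers and storage: the time to deliver a message is roughly its latency plus its size divided by bandwidth, so for the small messages typical of tightly coupled simulations it is latency, not headline bandwidth, that dominates. The figures in the sketch below are illustrative assumptions only, not measured numbers for any particular product.

    # Transfer-time model: time = latency + message_size / bandwidth.
    # The latency and bandwidth figures are illustrative assumptions only.
    INTERCONNECTS = {
        "low-latency fabric": {"latency_s": 2e-6, "bandwidth_bps": 16e9},
        "commodity Ethernet": {"latency_s": 50e-6, "bandwidth_bps": 10e9},
    }

    def transfer_time(size_bytes: int, latency_s: float, bandwidth_bps: float) -> float:
        """Estimated seconds to deliver one message of the given size."""
        return latency_s + (size_bytes * 8) / bandwidth_bps

    for size in (1_000, 1_000_000):  # a 1 KB control message vs. a 1 MB data block
        for name, link in INTERCONNECTS.items():
            seconds = transfer_time(size, link["latency_s"], link["bandwidth_bps"])
            print(f"{name}: {size:>9,} bytes -> {seconds * 1e6:8.1f} microseconds")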

Management

There are not many IT departments that have in-depth knowledge and experience of supercomputer systems. Without assistance from external suppliers it can take considerably longer to get equipment up and running, upgrades complete, software in place and systems configured to drive maximum power and performance.

Taking into consideration commercial confidentiality and data security, financial organisations should consider Cluster Management and Support Services — outsourced support operations — which enable financial organisations to focus all available IT department resources on non-cluster related queries and user problems.

Conclusion

Whether your financial organisation is aiming for a new implementation or to drive efficiency from an existing one, and whether you have legacy problems or a greenfield site, one thing is certain: financial organisations can use the power and performance of a supercomputer to help deliver the most accurate, comprehensive and actionable intelligence, providing that all-important competitive advantage.
