According to analyst group IDC, revenue for the overall supercomputing market grew a full 18 percent year on year, reaching $3.0 billion in the third quarter of 2007.
Growth of this scale is the result of several factors:
- In the university market, the traditional home of the supercomputer, growth is driven by budgets swelled by European funding, by investment and knowledge partnerships with private firms, and by larger aggregated UK university and college department budgets.
- In mainstream markets, growth is driven by a need for competitive advantage, efficiency and productivity gains and, to some extent, by Microsoft’s recently launched Compute Cluster Server, which is broadening the market and encouraging widespread adoption of supercomputing technologies.
By any measure, finance is one of the strongest growth sectors for supercomputers, driven by ever-increasing data volumes, greater data complexity and significantly more challenging data analysis – a key outcome of greater competition and regulation (through Basel I, Basel II and Sarbanes-Oxley, for example).
Financial organisations use the power and performance of supercomputers in a variety of ways:
- Portfolio optimisation – to run models and optimise thousands of individual portfolios overnight, based on the previous day’s trading results.
- Valuation of financial derivatives – a reinsurance firm, for example, may need to value and compute hedge strategies for hundreds of thousands of policyholders in its portfolio.
- Detection of credit card fraud – supercomputers enable a bank to easily run more fraud detection algorithms against tens of millions of credit card accounts.
- Hedge fund trading – supercomputers allow faster reaction to market conditions, enabling analysts to evaluate more sophisticated algorithms that take larger data sets into account.
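Workloads such as derivative valuation are embarrassingly parallel: simulation paths can be split into independent chunks whose results are simply summed at the end. As a minimal sketch (the instrument parameters and chunking scheme are illustrative, not any firm's actual model), here is a Monte Carlo valuation of a European call option split into chunks that a cluster would run as separate jobs:

```python
import math
import random

def value_chunk(seed, n_paths, s0, k, r, sigma, t):
    """Discounted-payoff sum for one chunk of Monte Carlo paths of a
    European call; on a cluster each chunk would run as a separate job."""
    rng = random.Random(seed)
    payoff_sum = 0.0
    for _ in range(n_paths):
        z = rng.gauss(0.0, 1.0)
        s_t = s0 * math.exp((r - 0.5 * sigma ** 2) * t + sigma * math.sqrt(t) * z)
        payoff_sum += max(s_t - k, 0.0)
    return payoff_sum

def price_call(n_chunks, paths_per_chunk, s0=100.0, k=100.0,
               r=0.05, sigma=0.2, t=1.0):
    # Embarrassingly parallel: chunk results only need a final sum.
    total = sum(value_chunk(seed, paths_per_chunk, s0, k, r, sigma, t)
                for seed in range(n_chunks))
    return math.exp(-r * t) * total / (n_chunks * paths_per_chunk)

print(round(price_call(8, 20_000), 2))  # close to the Black-Scholes value of about 10.45
```

Here the chunks are summed serially; on a cluster, each `value_chunk` call would be dispatched to a different node and only the partial sums sent back.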
Today, financial organisations are aiming to increase the computational power and performance at their disposal in one of three ways.
Some financial organisations will work directly with universities and colleges to draw on the power and performance of their academic supercomputers.
For example, financial analysis firm CD02 has recently teamed up with the Department of Computing at Surrey University on a three-year project to develop better pricing and risk analysis technology, which will ultimately help banks, hedge funds and investment outfits to trade a financial instrument called a collateralised debt obligation, or CDO. The project, sponsored by the former DTI, centres on the supercomputer’s ability to model huge problem spaces and run simulations for very complex risk analysis – though that is just one of the things the cluster will be used for.
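The details of the CD02/Surrey work are not public, so purely as an illustration of the kind of simulation such a cluster might run, here is a minimal sketch of the standard one-factor Gaussian copula approach to CDO tranche risk (all portfolio parameters are hypothetical):

```python
import math
import random
from statistics import NormalDist

def tranche_expected_loss(n_names=100, p_default=0.02, rho=0.3,
                          attach=0.03, detach=0.07, n_sims=5000, seed=1):
    """Expected loss on a CDO tranche, as a fraction of tranche notional,
    under a one-factor Gaussian copula: obligors tend to default together
    when the common market factor is bad, which is what drives tranche risk."""
    threshold = NormalDist().inv_cdf(p_default)  # per-name default barrier
    rng = random.Random(seed)
    a, b = math.sqrt(rho), math.sqrt(1.0 - rho)
    total = 0.0
    for _ in range(n_sims):
        m = rng.gauss(0.0, 1.0)                  # common market factor
        defaults = sum(1 for _ in range(n_names)
                       if a * m + b * rng.gauss(0.0, 1.0) < threshold)
        pool_loss = defaults / n_names           # zero recovery assumed
        # The tranche absorbs pool losses between its attachment and
        # detachment points only.
        tranche_loss = min(max(pool_loss - attach, 0.0), detach - attach)
        total += tranche_loss / (detach - attach)
    return total / n_sims

print(round(tranche_expected_loss(), 3))
```

Real pricing models are far more sophisticated, but even this toy version shows why the problem suits a cluster: each simulation path is independent, so paths can be farmed out across nodes.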
Alternatively, other financial organisations will meet the supercomputer requirement with new implementations, within their own organisations, using the very latest technology to provide maximum power and performance. The right approach generally depends on the applications being run, but many financial organisations are turning to cluster-based supercomputers built from low-cost blade servers.
Lastly, for “early supercomputer adopters,” an increase in computational power and performance will not come from new implementations or working in partnership with universities and colleges, but from driving greater efficiencies from existing third and fourth generation supercomputer implementations.
All approaches will demand greater storage capacity and instant storage scalability to keep pace with data generation and ensure storage does not become an innovation bottleneck.
In fact, IDC predicts that in 2008 average worldwide growth for the supercomputing storage market will actually be higher than for servers, at about 11 percent. IDC expects the market for HPC storage to reach about $4.8 billion in 2008, up from $3.8 billion in 2006.
Working directly with universities and colleges aside, implementing a new supercomputer or aiming to gain efficiencies from an existing implementation is not without problems.
IT managers responsible for supercomputers already operational in financial organisations often face piecemeal implementations created by generations of predecessors, each of whom added their own features and functionality. The result can be a mix of hardware, a plethora of in-house code running on old operating systems, and a selection of proprietary applications and other software.
IT managers are also being held back by the management and structure of their data centres, as it becomes clear that there is not a limitless supply of energy, space and budget to run an efficient data centre. For example, plugging yet another server into the data centre is no longer an option for companies operating in Canary Wharf, where firms face major difficulties securing extra power.
Although some companies are currently unaffected by power shortages, the cost of powering and cooling a data centre is becoming a pressing concern. A study published by consultancy BroadGroup found that the average energy bill to run a corporate data centre in the UK is about £5.3m per year and will double to £11m over five years.
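A doubling over five years implies a steep compound growth rate. Assuming smooth year-on-year growth (which the study does not state), the implied rate works out at roughly 16 percent:

```python
# If a £5.3m annual bill doubles to £11m over five years, the implied
# compound annual growth rate r satisfies 5.3 * (1 + r)**5 = 11.
start, end, years = 5.3, 11.0, 5
r = (end / start) ** (1 / years) - 1
print(f"implied annual growth: {r:.1%}")  # roughly 16% per year
```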
Data centres capable of hosting a supercomputer must also be able to take heat away from the site, and fewer and fewer buildings can cope. Most importantly, there is simply no space available in London, our financial capital, for expanding data centres; while there are old factories elsewhere in the country, there are limitations on their use for new facilities.
From a storage perspective, as Moore’s law continues to ramp up processor speed, the storage bottleneck is becoming more pronounced to end users. In many cases, IT managers are burdened with conventional storage technologies that require customisation before they can be applied effectively in supercomputing environments – and even then, the technology may not fully meet supercomputing performance, accessibility, capacity and availability requirements.
Finally, as data centre managers look to provide an IT infrastructure that can cost-effectively scale to meet rapidly changing business requirements, they are also evaluating switching and interconnect technology and realising that, in many cases, while the servers and storage may be suitable, the data transport mechanism between them is lagging behind.
For financial organisations either undertaking a new implementation or aiming to optimise an existing implementation, it is essential to take some precautionary steps:
1. Prepare and plan properly
Financial organisations must analyse and build a plan that considers current and future needs of the supercomputer in terms of power, cooling, space, effects on the environment, costs and management.
Naturally, it is equally important to consider user requirements for the supercomputer, including the need for future scalability and upgrades.
2. Select a qualified integrator
Financial organisations should look for a specialist supercomputer integrator, one with first-class credentials in delivering supercomputing projects. The following factors should always be taken into consideration:
Understand the customer. Supercomputer integrators must demonstrate a thorough understanding of their customers’ markets. Integrators must be able to understand what customers are trying to achieve, why their research or project is important and why they are trying to do it in that way.
Demonstrate history. As budgets grow and aggregate, customers are more wary and cautious of investment. Customers are looking for integrators with experience, a history and proven track record in delivering supercomputer solutions.
Hardware vendor relationship. In many instances, supercomputer solutions built for customers are firsts: the fastest, the largest and often unique. Some integrators will play one hardware vendor off against another, or propose solutions based on hardware from outside the traditional Tier 1 manufacturers. When you are working on the leading edge, however, problems can occur, so it is essential for customers to know that the integrator has a close, long-term relationship with the primary hardware supplier.
The HPC ecosystem. A single IT vendor cannot always supply the whole supercomputer solution – server, storage, interconnect, operating system, applications, etc. Customers therefore need a well connected integrator that can call on existing technology partner relationships to enhance solutions from the primary hardware vendor.
Technology innovation. Customers look to integrators to provide solutions based on the best technology available. Integrators must therefore react to technology innovation quickly.
Protect the environment. Environmentally conscious customers will be looking for integrators that can meet not just their computing needs but also their green needs, and this means designing more complex solutions: for example, using larger numbers of lower-powered processors, or vastly improving efficiency through virtualisation technology.
3. Make best use of technology
Any financial organisation looking to gain maximum power and performance, cost-effectively, from a new supercomputer server implementation should avoid proprietary hardware, which a manufacturer might choose to drop.
For server performance, it is also common sense for financial organisations to purchase blade server technology, which can use up to 50 percent less floor space in a data centre and up to 58 percent less energy than traditional servers.
For existing implementations, financial organisations should look for software, such as products from vendor Cluster Resources, that will help drive efficiencies and performance from existing cluster operations.
The software can take full responsibility for scheduling, managing, monitoring and reporting of cluster workloads, maximising job throughput.
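Products in this space (Cluster Resources’ Maui and Moab, for example) implement far richer policies, but the core idea – packing queued jobs onto free nodes so the cluster stays busy – can be sketched as follows (the job names, sizes and priorities are made up):

```python
def schedule(jobs, total_nodes):
    """Greedy scheduler sketch: take queued jobs in priority order, start
    any job whose node request currently fits, and let smaller jobs
    'backfill' around large ones. jobs: list of (name, nodes, priority)."""
    free = total_nodes
    running, queued = [], []
    for name, nodes, _prio in sorted(jobs, key=lambda j: -j[2]):
        if nodes <= free:
            running.append(name)
            free -= nodes
        else:
            queued.append(name)   # waits until enough nodes free up
    return running, queued, free

jobs = [("risk_overnight", 64, 10),   # hypothetical workloads
        ("fraud_scan",     16, 8),
        ("mc_pricing",     48, 6),
        ("dev_test",        4, 1)]
running, queued, free = schedule(jobs, total_nodes=96)
print(running, queued, free)
# → ['risk_overnight', 'fraud_scan', 'dev_test'] ['mc_pricing'] 12
```

Note how the low-priority `dev_test` job backfills the 12 spare nodes that the 48-node job cannot use; that kind of packing is what lifts overall job throughput.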
New supercomputer implementations can pick from a range of storage technologies: RAID (redundant array of inexpensive disks), SAN (storage area network), NAS (network-attached storage), HSM (hierarchical storage management), tape libraries and silos.
Perhaps more important than the choice of storage hardware, financial organisations should introduce a scalable, high performance file system with Information Lifecycle Management (ILM) capabilities, such as IBM’s General Parallel File System (GPFS), to keep up with the demands of real time data processing.
GPFS enables additional storage capacity and performance to be added and operational in minutes with no interruption to users or applications, scaling to multiple petabytes with hundreds of gigabytes per second of throughput. Once the data has been processed, it can be seamlessly relocated to lower-cost storage for archiving, providing financial organisations with a single, easy-to-manage pool of resources.
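GPFS expresses such relocation rules in its own policy language and moves data transparently; purely to illustrate the ILM idea, a tiering pass over file metadata might look like this (the paths and threshold are hypothetical):

```python
import time

# Illustrative ILM rule: files untouched for 30 days move to a cheaper
# archive tier. Real file systems such as GPFS express this in a
# dedicated policy language rather than application code.
ARCHIVE_AFTER = 30 * 24 * 3600  # seconds

def plan_migration(files, now=None):
    """files: list of (path, last_access_epoch). Returns (keep, archive)."""
    now = time.time() if now is None else now
    keep, archive = [], []
    for path, last_access in files:
        (archive if now - last_access > ARCHIVE_AFTER else keep).append(path)
    return keep, archive

files = [("/scratch/trades_today.dat", 9_500_000),   # hypothetical paths
         ("/scratch/trades_2006.dat", 0)]
keep, archive = plan_migration(files, now=10_000_000)
print(keep, archive)
# → ['/scratch/trades_today.dat'] ['/scratch/trades_2006.dat']
```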
IT managers should also carefully consider switching and interconnect technology. Currently, an interconnect battle rages between InfiniBand and 10 GigE. IDC expects the use of both of these high-speed interconnect technologies to grow.
A pervasive, low-latency, high-bandwidth interconnect, InfiniBand is backed by a steering committee made up of the world’s leading IT vendors, and is expected to win the battle.
There are not many IT departments that have in-depth knowledge and experience of supercomputer systems. Without assistance from external suppliers it can take considerably longer to get equipment up and running, upgrades complete, software in place and systems configured to drive maximum power and performance.
Taking into consideration commercial confidentiality and data security, financial organisations should consider Cluster Management and Support Services — outsourced support operations — which enable financial organisations to focus all available IT department resources on non-cluster related queries and user problems.
Whether your financial organisation is aiming for a new implementation or to drive efficiency from an existing one, and whether you have legacy problems or a greenfield site, one thing is certain: financial organisations can use the power and performance of a supercomputer to help deliver the most accurate, comprehensive and actionable intelligence, providing that all-important competitive advantage.