A more standardized HPC platform approach is bringing HPC projects within financial reach of a growing number of organizations. But this still leaves the dilemma of how organizations can cost-justify building dedicated datacenter facilities to support platforms that may become surplus to requirements in just a year or two.
The obvious alternative is to turn to a colocation datacenter provider. However, in the UK and in many areas of Europe the reality may not be so simple, as fit-for-purpose colocation facilities tend to be few and far between. HPC users will struggle to find colocation providers capable of meeting their specific and increasingly IoT-driven big data processing and analytical demands, especially when it comes to powering and cooling highly dense and complex platforms.
Is the only answer, therefore, either to accept the risk of going colo or to continue building expensive in-house datacenters?
Perhaps for some, particularly the not-for-profit science research sector, a best-of-both-worlds alternative is already available whereby HPC resources are shared. Certainly, in the UK and some other European countries, such government-backed solutions are on offer. For example, only last month it was announced that six UK universities are each to host HPC centers with $25 million of funding from the Engineering and Physical Sciences Research Council, a UK government body. This is to bridge the gap between the computing capabilities currently available to researchers in many UK universities and the state-of-the-art HPC resources accessible via the UK National Supercomputing Service (ARCHER).
But for many commercial organizations, be they in financial services, manufacturing, retail, oil & gas, pharmaceuticals and so on, the hard choice remains whether to self-build or to buy space in colo datacenters. For those where self-build has been ruled out on grounds of sheer capital expense, and where HPC project timescales are too short to warrant a dedicated facility, colocation seems inevitable. This then presents a further dilemma: there are many colos to choose from, but the majority have insufficient power and cooling for HPC densities and inadequate backup and auxiliary power services to meet continuity requirements.
Faced with such constraints, some HPC users turn to the general public cloud as a scale-out option. However, public cloud is generally unsuitable for true HPC workloads, despite cloud computing's promise of elastic, at-will compute resources for specific workloads.
Cloud may be fine for standard workloads, where the amount of CPU, storage or network resource a specific workload needs is generally quite definable. HPC is considerably more complex: it needs a mix of CPU and GPU server capabilities; highly engineered interconnects between all the various systems and resources; and storage latencies maintained in the low milliseconds, microseconds or even nanoseconds. All of this requires highly specialized workload orchestration that is not available on general public cloud platforms.
Attempting to create a true HPC environment on top of a general public cloud is therefore untenable. So yet again, organizations tend to find themselves back at square one, deciding on or reverting to a self-build solution, or making the best of what colo has to offer. A real catch-22.
Key colocation considerations for HPC users:
If the consensus is to take the colocation option, the following decision criteria may serve as a useful guide:
Hyper-dense HPC equipment needs high power densities, far more than the average colocation facility in Europe currently provides. Power for a 'standard' platform rarely exceeds 8kW per rack; in fact, the average across colocation facilities is closer to 5kW. A dense HPC platform will typically draw around 12kW per rack, and in some cases 30kW or more. Can the colocation facility provide that extra power now, not just promise it for the future? Will it charge a premium for routing more power to your system? Furthermore, do the multi-cabled power aggregation systems required include sufficient power redundancy?
Careful consideration must therefore be given to future-proofing power availability, to avoid both unplanned downtime and the disruption and cost of migration or de-installation should the facility become power-strapped. Clearly, power usage effectiveness (PUE) and carbon emissions credentials will also need evaluating from a cost, carbon tax and CSR perspective.
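To make the power question concrete, the back-of-envelope sketch below shows how quickly a dense HPC row outgrows a typical colocation power budget, and how PUE multiplies the bill. The per-rack draw, PUE and tariff figures are illustrative assumptions, not quotes from any particular facility.

```python
# Rough power-budget sketch for an HPC colocation deployment.
# All figures are illustrative assumptions, not vendor quotes.

racks = 10                    # number of HPC racks
kw_per_rack = 12.0            # typical dense HPC draw (could be 30+ kW)
pue = 1.5                     # assumed facility power usage effectiveness
tariff_per_kwh = 0.12         # assumed energy price per kWh
hours_per_year = 24 * 365

it_load_kw = racks * kw_per_rack            # IT equipment draw
facility_load_kw = it_load_kw * pue         # includes cooling, losses, etc.
annual_kwh = facility_load_kw * hours_per_year
annual_energy_cost = annual_kwh * tariff_per_kwh

print(f"IT load:       {it_load_kw:.0f} kW")
print(f"Facility load: {facility_load_kw:.0f} kW (PUE {pue})")
print(f"Annual energy: {annual_kwh:,.0f} kWh (~{annual_energy_cost:,.0f} per year)")

# Compare against what an 'average' colo can supply per rack.
typical_colo_kw_per_rack = 5.0
shortfall = kw_per_rack - typical_colo_kw_per_rack
print(f"Per-rack shortfall vs a 5 kW facility: {shortfall:.0f} kW")
```

Even at the modest end of HPC density, the shortfall against a 5kW-per-rack facility is obvious, and a lower PUE translates directly into lower running costs and carbon exposure.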
There will always be some form of immediate failover power supply in place, typically uninterruptible power supply (UPS), which is then relieved by auxiliary power from diesel generators. However, such immediate power provision is expensive, particularly when there is a continuous high draw, as required by HPC. UPS and auxiliary power systems must be capable of supporting all workloads running in the facility at the same time, along with overhead and enough redundancy to ride through any failure within the emergency power supply system itself. This is not necessarily accommodated in colocation facilities looking to move up from general-purpose applications and services to supporting true HPC environments.
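As a rough illustration of what "enough redundancy" means in practice, the sketch below sizes a bank of identical generator sets for a facility load with N+1 redundancy. The load, overhead factor and generator rating are assumptions for the sake of the example.

```python
import math

# Illustrative N+1 generator sizing for a facility's emergency power.
it_load_kw = 2000.0          # total IT load the facility must carry (assumed)
overhead_factor = 1.5        # cooling, lighting and losses that must also run (assumed)
generator_rating_kw = 800.0  # rating of each identical generator set (assumed)

total_load_kw = it_load_kw * overhead_factor
n_needed = math.ceil(total_load_kw / generator_rating_kw)  # N: enough to carry the load
n_plus_1 = n_needed + 1                                    # N+1: tolerate one unit failing

print(f"Emergency load to support:  {total_load_kw:.0f} kW")
print(f"Generators needed (N):      {n_needed}")
print(f"Generators installed (N+1): {n_plus_1}")
```

The point is that every extra kilowatt of continuous HPC draw has to be matched not once but several times over, in UPS capacity, generator capacity and the redundant spare, which is precisely where facilities built for general-purpose loads fall short.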
With HPC requiring highly targeted cooling, simple computer room air conditioning (CRAC) or free air cooling systems (such as swamp or adiabatic coolers) may not have the capabilities required.
Even where a modern HPC system uses in-row cooling, reducing its reliance on facility-wide air cooling, removing the heat generated in an effective manner may still be a problem. Hot- and cold-aisle cooling systems are increasingly inadequate for the heat created by larger HPC environments, which will require specialized and often custom-built cooling systems and procedures.
This places increased emphasis on having on-site engineering personnel with demonstrable knowledge of designing and building bespoke cooling systems, such as direct liquid cooling, for highly efficient heat removal and avoidance of on-board hot spots. This reduces the problems of high temperatures without excessive air circulation, which is both expensive and noisy.
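For a feel of why air alone struggles at HPC densities, the sketch below estimates the airflow needed to carry away a rack's heat for a given supply-to-return temperature rise, using standard air properties. The rack power and temperature rise are illustrative assumptions.

```python
# Estimate the airflow needed to remove a rack's heat load with air cooling.
# Rack power and temperature rise are illustrative assumptions.

rack_power_w = 30_000.0   # 30 kW dense HPC rack (assumed)
delta_t_k = 10.0          # supply-to-return air temperature rise (assumed)

air_density = 1.2         # kg/m^3, sea-level air
air_cp = 1005.0           # J/(kg*K), specific heat of air

airflow_m3_s = rack_power_w / (air_density * air_cp * delta_t_k)
airflow_m3_h = airflow_m3_s * 3600
airflow_cfm = airflow_m3_s * 2118.88   # cubic feet per minute

print(f"Airflow required: {airflow_m3_s:.2f} m^3/s "
      f"({airflow_m3_h:,.0f} m^3/h, ~{airflow_cfm:,.0f} CFM)")
```

Moving several thousand cubic feet of air per minute through a single cabinet is exactly the expensive, noisy circulation referred to above, which is why direct liquid cooling becomes attractive at these densities.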
Consider the availability of diverse, high-speed on-site fiber cross-connects. Basic public connectivity will generally not be sufficient for HPC systems, so look for providers that offer specialized connectivity options.
The HPC platform may be working well; all access devices may be working; the public internet is working. But what if the link between the organization (or the public internet) and the colocation facility goes down and there is no failover? As many connectivity problems come down to physical damage, such as cables being cut during roadworks, ensuring that connectivity runs over multiple, physically diverse routes into the facility is crucial.
Other areas where a colocation provider should be able to demonstrate capabilities include specialized connections to public clouds, such as Microsoft Azure ExpressRoute and AWS Direct Connect. These bypass the public internet to enable more consistent and secure interactions between the HPC platform and other workloads the organization may be operating.
Last but not least, the physical location of the datacenter will directly impact rack space costs and power availability. In the case of colocation there are often considerable differences in rack space rents between regional facilities and those based in or around large metro areas such as London. Perhaps of more concern to HPC users, the availability and reliability of the power supply will likely vary from region to region. The majority of facilities are not directly connected to the grid and sit several pylon hops from substations. Some facilities in power-strapped areas are already pushed to supply 4kW per rack.
Fortunately, the ever-decreasing cost of high-speed fiber is providing more freedom to build modern colo facilities much further away from metro areas without incurring the latency issues of old. Examples include the NGD mega data facility in South Wales, where renewable power is in abundant supply (180MW) and the facility is connected directly to the national grid, and of course some of the emerging facilities in the Nordic region, where hydroelectric power is plentiful and low cost.
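To put the distance question in perspective, the sketch below estimates fiber propagation delay between a metro hub and a remote colo site, taking light in fiber at roughly two-thirds of its vacuum speed (about 5 microseconds per kilometre). The route length is an illustrative assumption.

```python
# Estimate fiber propagation delay between a metro hub and a remote colo site.
# The route distance is an illustrative assumption.

route_km = 250.0                         # assumed fiber route length
speed_of_light_km_s = 299_792.458        # km/s in vacuum
fiber_speed_km_s = speed_of_light_km_s * (2 / 3)   # refractive index ~1.5

one_way_ms = route_km / fiber_speed_km_s * 1000
round_trip_ms = 2 * one_way_ms

print(f"One-way delay:    {one_way_ms:.2f} ms (~5 microseconds per km)")
print(f"Round-trip delay: {round_trip_ms:.2f} ms")
```

A round trip of a couple of milliseconds is negligible for workloads that run entirely within the facility, which is what allows sites such as NGD or the Nordic facilities to trade metro proximity for abundant, low-cost power.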
In summary, look closely enough and commercial HPC users will find a few fit-for-purpose colocation choices already available in the UK and Europe. Provided, that is, they carefully evaluate the ability of would-be partners to guarantee the power and backup contingencies required for the duration of the project, with high levels of redundancy on tap should needs suddenly change and to mitigate the risk of unplanned downtime. Ensuring the engineering team is capable of understanding and delivering bespoke rack configurations and specialized cooling environments is also a major prerequisite.
About the Author
Clive Longbottom is the founder and research director of Quocirca, the UK-based pan-European market analyst firm. Clive covers areas as diverse as storage, servers, operating systems, IT platforms, datacenters, systems management, online services, big data and analytics.
Trained as a chemical engineer, Clive understands that everything within a business is predicated on process, and that the only point of technology is to make sure that processes run efficiently and smoothly. As a research engineer for Johnson Matthey he worked on several projects, including anti-cancer drugs, efficient NOx/SOx burners and a long period working on primary energy generation via fuel cells.