I’ve no doubt that familiar themes will be circulating in the halls of Supercomputing in Denver, echoes of last year’s show – how to survive in the post-Moore’s Law era, the race to exascale, how to access quantum computing. But this year I think another overarching theme will be added to the coffee-queue chat: how to cope with the new norm, the HPC storage cocktail.
I’m referring to a practice that more and more people are considering: mixing different environments, as well as on-prem and cloud platforms, to make storage spend go as far as possible. As the new architecture from Arm gains traction and more people look to cloud platforms to boost their on-premises clusters, there’s no doubt that the question of how to make these systems work together effectively will be on people’s lips.
Mixing storage systems can throw up real problems, or uncover problems that until that point had been hidden. For example, moving to a new environment can expose I/O problems that weren’t visible before, including bad I/O patterns such as small reads and writes that can look like CPU activity until the I/O is profiled. An organisation won’t feel the benefit that investment in a new storage system should bring unless the bridge between the existing system and the new one is fully understood.
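To see why small reads and writes matter, here is a minimal Python sketch (illustrative only, not tied to any particular profiler or storage system) that counts the write syscalls needed to push the same payload to disk in tiny chunks versus one large chunk. On a shared filesystem, each of those tiny syscalls is a separate round trip to storage.

```python
import os
import tempfile

def write_in_chunks(path, data, chunk_size):
    # One os.write() per chunk: each call is a separate syscall,
    # so a small chunk size means many small writes hitting storage.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    syscalls = 0
    try:
        for offset in range(0, len(data), chunk_size):
            os.write(fd, data[offset:offset + chunk_size])
            syscalls += 1
    finally:
        os.close(fd)
    return syscalls

data = b"x" * (1 << 20)  # a 1 MiB payload
with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "out.bin")
    small = write_in_chunks(path, data, 16)       # 16-byte writes
    large = write_in_chunks(path, data, 1 << 20)  # one 1 MiB write
    print(small, large)  # 65536 syscalls vs 1 for the same data
```

The same megabyte costs 65,536 syscalls at 16 bytes a time but only one when buffered into a single write – from the outside, the chunked version just looks like a busy process until the I/O is profiled.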
At SC, this issue will certainly be addressed, and there will be the usual rainbow of storage solutions and add-on technologies to help. Our team are looking forward to learning about the new solutions emerging to help organisations manage mixed systems. It’s still early days for this type of environment, but we’ve already spoken to a lot of people who are testing the water with hybrid cloud set-ups.
At this stage most organisations we work with are selecting specific projects to migrate to the cloud and thinking about new storage architectures that they can exploit with that move. Object storage has a set-up cost, but with potentially good long-term cost savings I expect that a lot of vendors will be pushing that for on-prem deployments as well.
Containerization is another flavour to add to the mix. Most people are looking at Docker or Singularity as the two main options, sitting on top of platforms such as OpenStack or Kubernetes. While Singularity is little known outside the HPC community, at a high level it seems to better support some of the data demands of HPC applications, though it doesn’t yet have as developed an ecosystem as Docker. This year’s SC might be the year that more organisations take the leap, deploy it in production and see how it measures up.
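For readers who haven’t met Singularity, a minimal definition file sketch (the base image and bind path below are illustrative assumptions, not a recommendation) shows the model: build once from a Docker base, then run as an ordinary file on the cluster.

```
# app.def – a minimal Singularity definition file (illustrative sketch)
Bootstrap: docker
From: ubuntu:22.04

%post
    apt-get update && apt-get install -y python3

%runscript
    exec python3 "$@"
```

Built with `singularity build app.sif app.def` and run with `singularity run --bind /scratch:/scratch app.sif my_script.py`, the container executes as the calling user rather than root, and host directories can be bind-mounted straight in – one reason it maps more naturally onto shared HPC filesystems than the Docker daemon model.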
Another trend, I believe, is that we will see far more people treading the halls of SC who might not have been there in previous years. Big data and the growth of AI mean that more and more industries are looking to what has traditionally been considered HPC storage to provide the big compute they need to run their applications.
These trends all feed into each other. The presence of these newcomers, with their different views on hardware and software, is no doubt speeding up the growth of cloud platforms in the traditional HPC storage market, which is no bad thing. We could all do with having our viewpoints shaken up.
In general, we are heading into an era of more variety and more competitive platforms, serving a greater and more diverse range of customers. This could well be the most exciting SC yet, as just a few of the opportunities that this cocktail presents start to become apparent.
About the Author
Dr. Rosemary Francis is CEO and founder of Ellexus, the I/O profiling company. Ellexus makes application profiling and monitoring tools that can be run on a live compute cluster to protect from rogue jobs and noisy neighbors, make cloud migration easy and allow a cluster to be scaled rapidly. The system- and storage-agnostic tools provide end-to-end visibility into exactly what applications and users are up to.