A classic problem in software development is how to manage an application's dependencies. This extends all the way from the time you write your application to the time you run or deploy it. The typical application almost always depends on specific versions of libraries, compilers, or the OS-level package management system.
Organizations deploying big data and HPC workloads are looking for solutions that can be designed and developed once, then deployed anywhere, at any scale, and on any available technology, without manual tweaking or installation steps.
To work with strictly regulated datasets mixed and spread across different implementations and environments (such as in the financial, government and healthcare sectors), applications need to be independent of the particular infrastructure and technologies of both public and private clouds, and able to utilize either.
Data processing workloads benefit from abundant and cheap public compute resources, but HPC projects more commonly need to navigate regulations that require operations to be isolated to local and private infrastructures.
If we map the geographical locations of the biggest cloud players on the market, we quickly see that there are only a few of them. This means that a large share of the cloud’s Internet traffic flows to and from a handful of locations, almost all of which are concentrated in the Western countries of the world.
The Internet of Things, however, has brought forward the rise of increasingly mobile, location-agnostic technologies, which are seeing rapid adoption in the places where their benefits can most prominently be realized: areas and countries that lack traditional, well-established ICT infrastructure. In other words, non-Western countries.
Most recently, cloud adoption has focused on public and private cloud deployment models, based on the now-accepted mainstream forms of cloud computing. The next phase of this process, however, is the paradigm of the hybrid cloud, and the need to decentralize our applications to a greater extent than with the more centralized forms of public or private cloud deployment to which we are accustomed. This new cloud paradigm, with the help of technologies built on containerization such as Docker and CoreOS, will make it just as easy to provision or set up a cloud infrastructure as a regular white box Linux machine.
This kind of decentralized approach isn’t new. SETI (the ‘Search for Extraterrestrial Intelligence’ project) started amassing huge amounts of data in the 1980s, but lacked the resources and technology to process it. In response, during the 1990s it developed an application that was freely downloadable by the public and used the users’ private “infrastructure” (the unused computing power of their desktop computers) as data processing nodes for the project.
Of course technology has come a long way since then.
CoreOS and Docker are perfect companions for implementing this kind of distributed and interoperable hybrid architecture. Hadoop is an example of a solution that can be containerized well in a system like this to facilitate deployment and automate installation. Containers reduce the overhead that makes traditional virtual machines an unsuitable solution for HPC. The simplified architecture of CoreOS and the structure of Docker containers complement each other, forming a well-tuned application delivery system on top of underlying distributed storage solutions such as Hadoop’s distributed file system, HDFS.
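As a rough illustration of what that kind of containerized deployment can look like, here is a minimal sketch using the Docker SDK for Python to start HDFS daemons as containers. The image name, commands and environment variable are hypothetical placeholders rather than a specific Hadoop distribution.

```python
import docker

# Minimal sketch: run an HDFS NameNode and DataNode as Docker containers.
# "example/hadoop-hdfs" and HDFS_NAMENODE_HOST are hypothetical placeholders.
client = docker.from_env()

namenode = client.containers.run(
    "example/hadoop-hdfs",            # hypothetical Hadoop image
    command="hdfs namenode",
    name="hdfs-namenode",
    network_mode="host",              # share the host network for simplicity
    detach=True,
)

datanode = client.containers.run(
    "example/hadoop-hdfs",
    command="hdfs datanode",
    name="hdfs-datanode",
    network_mode="host",
    environment={"HDFS_NAMENODE_HOST": "localhost"},  # hypothetical setting
    detach=True,
)

print(namenode.name, namenode.status)
print(datanode.name, datanode.status)
```

The same containers could be scheduled on a CoreOS host, a bare metal machine or a public cloud VM without changes, which is exactly the portability argument being made here.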
In addition to the core technologies of containerization, there are exciting projects that show great promise and align well with distributed solutions. Kubernetes is capable of managing clusters of Linux containers as a single system, and Apache Mesos provides a distributed systems kernel that abstracts compute resources in order to build fault-tolerant and elastic distributed systems.
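To make the "cluster as a single system" idea concrete, here is a minimal sketch using the official Kubernetes Python client to ask a cluster to keep three replicas of a containerized worker running; the deployment name, labels and image are hypothetical placeholders.

```python
from kubernetes import client, config

# Minimal sketch: declare a Deployment of three replicas of a hypothetical
# containerized worker and let Kubernetes schedule them across the cluster.
config.load_kube_config()  # use the local kubeconfig credentials

container = client.V1Container(
    name="hpc-worker",                      # hypothetical container name
    image="example/hpc-worker:latest",      # hypothetical image
)
template = client.V1PodTemplateSpec(
    metadata=client.V1ObjectMeta(labels={"app": "hpc-worker"}),
    spec=client.V1PodSpec(containers=[container]),
)
spec = client.V1DeploymentSpec(
    replicas=3,
    selector=client.V1LabelSelector(match_labels={"app": "hpc-worker"}),
    template=template,
)
deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="hpc-worker"),
    spec=spec,
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```

Kubernetes then places the three replicas wherever capacity is available and restarts them if a node fails, which is the single-system behaviour described above.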
With less overhead, access to bare metal provisioning, and a complete set of technologies to complement applications in a distributed, scalable environment, next-generation hybrid clouds will become a desirable environment for HPC workloads that treat resource efficiency as a crucial benefit.
Even though these projects are still very young, they show tremendous promise in being able to deliver this in a simple and elegant manner. They can run in both private and public cloud environments, and scale across both to maximize performance and efficiency.
This trend is driving a consolidation of different kinds of IT workloads, such as big data and HPC applications, onto a next-generation distributed cloud architecture. The end result this technology can deliver is a vast, global, interconnected “cloud of clouds,” and the ability to seamlessly deploy applications that capitalize on containerization, globally.
Hailing from the open source world, and largely developed (and utilized) in research environments, not unlike the beginnings of UNIX and Linux, these new technologies enable a much more level playing field and a more open market. Easy access to standardized, commodity software components will make it just as easy to set up a cloud infrastructure as a regular white box Linux machine.
In addition, because these technologies are commoditized and standardized across the industry, HPC and big data crunching applications will be deployable with the click of a button or a simple command on a wide variety of infrastructures, ranging from virtual machines to bare metal, and from private and public cloud deployments to specialized local clusters. HPC developers and businesses will be able to create containerized, packaged applications that can run at any scale, maximizing the efficiency of available resources.
About Tryggvi Lárusson, Co-Founder & Chief Technology Officer
Tryggvi is an expert in the architecture of enterprise web applications, specializing in storage and network systems for hardware and virtualized environments. At GreenQloud, he’s focused on enabling the convergence of cloud application development and systems operations. Prior to founding GreenQloud, Tryggvi was the Co-founder, Chief Technical Officer and Chairman of Idega Software, providing web solutions for eGovernment. He studied Software Engineering of Distributed Systems at the Royal Institute of Technology in Sweden, and Computer and Electrical Engineering at the University of Iceland.