This blog was also published on the Amalgam Insights website.
As is the case with all new technology, container cluster deployments began small. There were some companies, Google for example, that were deploying sizable clusters, but these were not the norm. Instead, there were some test beds and small, greenfield applications. As the technology proved itself and matured, more organizations adopted containers and the market favorite container orchestrator, Kubernetes. The emergence of Kubernetes was, in fact, a leading indicator that containers were starting to see more widespread adoption in real applications. The more containers deployed, the greater the need for software to automate their lifecycle. Even so, it was unusual to find organizations standing up many Kubernetes clusters, especially geographically dispersed clusters.
That is beginning to change. Organizations that have adopted containers and Kubernetes are starting to struggle with managing multiple clusters spread throughout an enterprise. Just as managing large numbers of containers in a cluster was the impetus for orchestrators such as Kubernetes, new software is needed to manage large-scale, multi-cluster environments. At the same time, Kubernetes clusters have been getting more complex internally. From humble beginnings of a handful of containers running a microservice or two, clusters now include containers for networking (including service mesh sidecars and data planes), logging, application performance monitoring, database connectivity, and storage. All that is in addition to the growing number of microservices being deployed.
In a nutshell, more Kubernetes container clusters are being deployed, and they are larger and more complex. It is no longer enough to manage the lifecycle of the containers; it is now necessary to manage the lifecycle of the cluster itself. This is the purpose of a Kubernetes control plane.
Kubernetes control planes comprise a series of functions that manage the health and well-being of the cluster. Common features are:
- Cluster lifecycle management including provisioning of clusters, often from templates for common types of clusters.
- Versioning including updates to Kubernetes itself.
- Security and auditing.
- Visibility, monitoring, and logging.
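To make the first feature concrete, cluster lifecycle management in these products is typically template-driven: an operator defines a template for a common cluster type once, then stamps out clusters from it. The sketch below is purely illustrative; the `ClusterTemplate` structure and `provision_cluster` function are hypothetical and do not correspond to any vendor's actual API or schema.

```python
from dataclasses import dataclass, field

# Hypothetical template for a common cluster type. Real control planes
# (Rancher, Tanzu Mission Control, etc.) each use their own schemas.
@dataclass
class ClusterTemplate:
    name: str
    kubernetes_version: str
    node_count: int
    addons: list = field(default_factory=list)  # e.g. logging, service mesh

def provision_cluster(template: ClusterTemplate, cluster_name: str, region: str) -> dict:
    """Expand a template into a concrete cluster specification.

    In a real control plane this spec would be handed off to a
    provisioning engine; here it is simply returned for inspection.
    """
    return {
        "name": cluster_name,
        "region": region,
        "kubernetesVersion": template.kubernetes_version,
        "nodeCount": template.node_count,
        "addons": list(template.addons),
    }

# Usage: stamp out a production-style cluster from a shared template,
# so every cluster of this type starts from the same known-good baseline.
prod_template = ClusterTemplate(
    name="prod-standard",
    kubernetes_version="1.18.6",
    node_count=5,
    addons=["logging", "monitoring", "service-mesh"],
)
spec = provision_cluster(prod_template, "payments-eu", "eu-west-1")
print(spec["kubernetesVersion"])  # prints 1.18.6, inherited from the template
```

The point of the template is consistency: every cluster provisioned from it gets the same version, sizing, and add-ons, which is what makes fleets of clusters manageable at all.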
Kubernetes control planes are policy-driven and automated. This allows operators to focus on governance while the control plane software does the rest. Not only does this reduce errors, but it also allows for faster responses to changes or problems that may arise. This automation is necessary since managing many large, multi-site clusters by hand would require large amounts of manpower and, hence, cost.
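The policy-driven loop described above can be sketched in a few lines: the control plane continuously compares each cluster's observed state against declared policy and emits remediation actions, rather than an operator inspecting clusters by hand. Everything here is a hypothetical illustration; the `reconcile` function, field names, and the policies themselves are invented for this example.

```python
# Hypothetical fleet policy: no cluster may run a Kubernetes version
# older than 1.18, and audit logging must be enabled everywhere.
MIN_VERSION = (1, 18)

def parse_version(version: str) -> tuple:
    """Turn a version string like '1.16.9' into a comparable (major, minor) tuple."""
    major, minor = version.split(".")[:2]
    return (int(major), int(minor))

def reconcile(clusters: list, min_version: tuple = MIN_VERSION) -> list:
    """Compare observed cluster state to policy and return remediation actions.

    A real control plane would execute these actions automatically;
    here they are returned as strings for inspection.
    """
    actions = []
    for cluster in clusters:
        if parse_version(cluster["version"]) < min_version:
            actions.append(
                f"upgrade {cluster['name']} to >= {min_version[0]}.{min_version[1]}"
            )
        if not cluster.get("audit_logging", False):
            actions.append(f"enable audit logging on {cluster['name']}")
    return actions

# Usage: observed state of two (fictional) clusters in the fleet.
fleet = [
    {"name": "edge-us", "version": "1.16.9", "audit_logging": True},
    {"name": "core-eu", "version": "1.18.6", "audit_logging": False},
]
for action in reconcile(fleet):
    print(action)
# prints:
#   upgrade edge-us to >= 1.18
#   enable audit logging on core-eu
```

Because the policy is declared once and evaluated continuously, the same check scales from two clusters to hundreds without additional operator effort, which is exactly the economy the paragraph above describes.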
Software vendors have stepped up with products to meet this emerging need. In the past year, products that implement a Kubernetes control plane have been announced or deployed by Rancher, Platform9, IBM’s Red Hat division (Advanced Cluster Management), VMware (Tanzu Mission Control), and more. All of these Kubernetes control planes are designed for multi-cloud, hybrid clusters and are packaged either as part of a Kubernetes distribution or as an aftermarket addition to a company’s Kubernetes product.
Kubernetes control planes are a sign of the normalization of container clusters. The growth in both the complexity and scale of container clusters necessitates a management layer that helps DevOps teams stand up and manage clusters more quickly. This is the only way that platform operations can match the speed of Agile development and automated CI/CD toolchains. It is yet another piece of the emerging platform where our modern cloud-native applications will live.