Kubernetes 1.4 makes container orchestration bigger — and simpler
Version 1.4 of Google’s Kubernetes, which is fast becoming the de facto standard for running containers at scale, was released yesterday afternoon.
Compared to Docker Swarm, the out-of-the-box clustering solution for Docker containers, Kubernetes has a reputation for being more powerful but more difficult to manage. Version 1.4 is aimed as much at making Kubernetes less burdensome to set up and work with as it is at expanding the container platform’s feature set.
Kubernetes has established itself as the go-to cluster solution for containers. Further clinching that reputation, Canonical has now elected to use the platform for its future container and clustering projects.
Set-up and maintenance
Most of the burden of working with Kubernetes, as with most large, distributed software applications, falls into two categories: set-up and maintenance. Some software vendors have automated those functions; Mesosphere’s DC/OS, for instance, can install and manage Kubernetes. But there has been room for improvement in making Kubernetes itself less complex.
A constant issue with containerised applications is how to manage state for the apps. There’s certainly no shortage of options — relational databases, key-value stores, container volumes, and network-attached storage all come to mind — but the details are devilish.
A class of new features in Kubernetes 1.4 is aimed at better supporting stateful applications; many are expansions of existing Kubernetes features for managing state. Kubernetes has long had volumes, which provide persistent storage to running workloads. But version 1.4 adds features such as the ability to dynamically provision volumes on demand to meet application needs.
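With dynamic provisioning, an application asks for storage through a claim, and the cluster satisfies that claim on demand rather than requiring an administrator to pre-create volumes. A minimal sketch of such a claim, written here as a Python dict purely for illustration — the names "app-data" and "fast" are placeholders, and in the 1.4-era API the desired storage class was requested via a beta annotation:

```python
import json

# A PersistentVolumeClaim asking the cluster to provision 10Gi of storage
# dynamically. In Kubernetes 1.4 the storage class was named via a beta
# annotation; "app-data" and "fast" are illustrative placeholders.
pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {
        "name": "app-data",
        "annotations": {
            "volume.beta.kubernetes.io/storage-class": "fast",
        },
    },
    "spec": {
        "accessModes": ["ReadWriteOnce"],  # mounted read-write by a single node
        "resources": {"requests": {"storage": "10Gi"}},
    },
}

# Serialise to JSON, ready to be saved as a manifest for the API server.
print(json.dumps(pvc, indent=2))
```

When a pod references this claim, the cluster provisions a matching volume behind the scenes, which is what removes the manual volume-creation step for stateful apps.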
Another new feature, init-containers, lets you ensure that containers start in a certain sequence — for instance, to spin up a database before starting its attendant app. You can now also run scheduled jobs within a Kubernetes cluster, although the feature is considered an alpha-level addition for now.
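Both features amount to new fields in the objects you submit to the cluster. The sketches below, again as Python dicts for illustration, show the general shape; all names and images are placeholders. The init-container example uses the spec form the feature later settled on (in 1.4 itself the alpha feature was expressed through a pod annotation), and the scheduled-job resource was called ScheduledJob in the alpha batch API of that era before being renamed CronJob:

```python
import json

# A pod whose init container blocks until a database answers, so the app
# container only starts once its dependency is up. "db", port 5432 and the
# images are illustrative placeholders.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "app-with-db"},
    "spec": {
        # Init containers run to completion, in order, before the regular
        # containers start.
        "initContainers": [{
            "name": "wait-for-db",
            "image": "busybox",
            "command": ["sh", "-c", "until nc -z db 5432; do sleep 1; done"],
        }],
        "containers": [{"name": "app", "image": "example/app:latest"}],
    },
}

# A scheduled job: run a container on a cron schedule inside the cluster.
scheduled_job = {
    "apiVersion": "batch/v2alpha1",
    "kind": "ScheduledJob",   # renamed CronJob in later releases
    "metadata": {"name": "nightly-report"},
    "spec": {
        "schedule": "0 2 * * *",  # standard cron syntax: 02:00 every day
        "jobTemplate": {
            "spec": {
                "template": {
                    "spec": {
                        "containers": [{
                            "name": "report",
                            "image": "example/report:latest",
                        }],
                        "restartPolicy": "OnFailure",
                    }
                }
            }
        },
    },
}

print(json.dumps(pod, indent=2))
print(json.dumps(scheduled_job, indent=2))
```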
The conventional wisdom about running clusters is that you don’t do it by half measures, so Kubernetes has come to be associated with big customers running big workloads. Consequently, some of the additions in Kubernetes 1.4 are aimed at this crowd.
A passel of Kubernetes functions, some new and some promoted to beta, support cluster federation: building clusters that span more than one geographic region or physical location. Federated Ingress, for instance, allows inbound connections to be routed to the cluster nearest the request, although right now that feature is tied closely to functionality in Google Cloud.
Federated Ingress is still an alpha feature, though, and the sheer number of third parties that contribute to Kubernetes pretty much guarantees it won’t be exclusive to any one cloud once it matures.
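From the user’s point of view, a federated Ingress looks like an ordinary Ingress object: it is submitted to the federation control plane, which propagates it to the member clusters and programs a global load balancer so traffic lands in the nearest one. A sketch of such an object, as a Python dict with placeholder names:

```python
import json

# An Ingress object of the kind used by cluster federation. Submitted to
# the federation API server rather than a single cluster, it routes all
# traffic to a "frontend" service expected to exist in each member cluster.
# "global-frontend" and "frontend" are illustrative placeholders.
ingress = {
    "apiVersion": "extensions/v1beta1",
    "kind": "Ingress",
    "metadata": {"name": "global-frontend"},
    "spec": {
        "backend": {
            "serviceName": "frontend",  # a service of the same name in each cluster
            "servicePort": 80,
        }
    },
}

print(json.dumps(ingress, indent=2))
```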
One measure of Kubernetes’ ongoing success is that third parties have built other product lines on top of or around it. OpenStack, by way of Mirantis, is being reworked so that it can be deployed in containers on Kubernetes. After all, why invent an orchestration system from scratch when there’s one already available with momentum and support behind it?
The thought seems to have occurred to Canonical. The day after Kubernetes 1.4’s release, the maker of the Ubuntu Linux distribution announced its own distribution of Kubernetes, outfitted with tools like “a fully integrated Elastic stack including Kibana for analysis and visualisations,” according to Canonical’s press release. Aside from Canonical’s additions, the core is stock Kubernetes kept in sync with the original codebase.
Canonical’s product is still officially a beta, which implies the company is taking time to make sure all of its additions are properly supported in the 1.4 release. But Canonical has accepted the message that Kubernetes and its developers have been broadcasting for some time: Don’t reinvent, just implement.
IDG News Service