Kubernetes 1.10: What’s New?

Introduction

Kubernetes 1.10 is set to be released on Wednesday, March 21. As has been the case over the past year, this release is characterized by incremental but significant enhancements across multiple functional areas. This is further evidence that Kubernetes is maturing nicely: recent releases are not dominated by any single large feature, but are built from steady improvements to many components governed by mostly autonomous teams. The stability of the platform and its readiness for production contributed to the CNCF’s decision to graduate the project out of its incubation stage.

Whether you are a developer, a cluster operator, or both, the Kubernetes 1.10 release brings new features and fixes worth previewing. When a change makes it into the release, it is reflected in the Kubernetes change log. While the change log is the authoritative source, it doesn’t include changes still in progress. To find these, look at the list of open Kubernetes issues and pull requests grouped under a milestone. As of February 15, this list had 76 items. Some items represent individual pieces of work in a larger effort, while others are themselves an “umbrella” that captures all related work. All items are labeled with the name of the Kubernetes Special Interest Group (SIG) responsible for bringing them to completion. If, for example, you want to find Windows-related work, you can filter items by the “sig/windows” label.

So What’s New in Kubernetes 1.10?

We find that the majority of the work in the 1.10 release is more important for cluster operators than developers. Many of the changes are bug fixes and internal refactoring, with the aim of stabilizing the Kubernetes core and improving the release velocity of other components. Here is our look at the highlights from the points of view of a cluster operator and a developer.

Operator’s Perspective

Kubernetes 1.10 brings feature parity and fixes to Kubernetes on Windows, and graduates some key APIs.

FlexVolume plugins are now supported on Windows nodes. The FlexVolume volume type lets you use a custom storage driver. In addition, the driver lifecycle can be managed independently of the cluster lifecycle; no need to stop and start kubelets or kube-controller-manager to upgrade the driver. (Pull request to enable flexvolume on Windows node)
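To make this concrete, here is a minimal sketch of a pod that mounts a FlexVolume. The driver name and its options below are hypothetical placeholders for whatever driver is installed on the node:

    apiVersion: v1
    kind: Pod
    metadata:
      name: flexvolume-demo
    spec:
      containers:
      - name: app
        image: nginx
        volumeMounts:
        - name: data
          mountPath: /data
      volumes:
      - name: data
        flexVolume:
          driver: "example.com/nfs"    # hypothetical <vendor>/<driver> pair
          fsType: "ext4"
          options:                     # options are driver-specific
            server: "nfs.example.com"
            share: "/exports"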

The kubelet config API introduced in v1.8 graduates from alpha to beta. Prior to this API, kubelets had to be reconfigured out-of-band; now they can be configured using ConfigMaps. A similar API for kube-proxy is also graduating from alpha to beta. The Audit API, meanwhile, is graduating to GA. You can use this API to generate audit logs for all API requests and export them to a pluggable backend.
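As a rough sketch of the new kubelet configuration flow (the file name and eviction threshold are our own examples, and the exact node reference format may still change while the API is in beta): write a KubeletConfiguration to a file, load it into a ConfigMap, and point the node’s spec.configSource at that ConfigMap.

    # kubelet.yaml: a KubeletConfiguration object
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    evictionHard:
      memory.available: "200Mi"

    # Load it into a ConfigMap under the key "kubelet"
    kubectl -n kube-system create configmap my-node-config --from-file=kubelet=kubelet.yaml

    # Then edit the node and set spec.configSource to reference the ConfigMap
    kubectl edit node <node-name>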

A new Node “shutdown” taint is added. If your cluster is integrated with your cloud using a cloud provider, you may have noticed that Kubernetes does not detach storage volumes from nodes reporting NotReady. Pods are rescheduled to another node, but cannot use their storage volumes there. This is because the cloud provider does not know whether it is safe to detach storage volumes from the node reporting NotReady. Once your cloud provider supports this new taint, it will use it to determine when detaching (and reattaching) a storage volume is safe.
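As a sketch, a node that the cloud provider reports as shut down would carry a taint along these lines (the taint key below follows the naming convention used in the implementation and should be treated as illustrative):

    spec:
      taints:
      - key: node.cloudprovider.kubernetes.io/shutdown
        effect: NoSchedule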

You will be able to configure the API server to use a custom TLS cipher suite. The default cipher suite aims to balance security and compatibility. If you would rather give up some compatibility than accept less secure ciphers, you can define your own cipher suite.
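For example, you could restrict the API server to two modern ciphers with a flag like the one below. The cipher names come from Go’s crypto/tls package; the selection here is illustrative, not a recommendation:

    kube-apiserver \
      --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 \
      ...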

Last but not least, let’s look at the “Action Required” items. Included in the change log of every release, these call out changes that may be backward incompatible with your cluster. So far in this release, the kubelet “--cloud-provider=auto-detect” option has been removed, and webhooks no longer skip cluster-scoped resources. If you have not already deployed the cloud-controller-manager component, plan to do so soon. To decouple cloud provider implementations from the Kubernetes code, they will be moved from kube-controller-manager to cloud-controller-manager in a coming release, and eventually into individual binaries.

Developer’s Perspective

Developers will find a number of small but welcome improvements, including support for binary data in a ConfigMap and a new “kubectl set volume” command. There are no major features, but on the other hand there are no breaking changes, either.

Add or remove volumes on a deployment with a single command: “kubectl set volume.” The “kubectl set” commands update a deployment, triggering a rolling update. Past releases added support for updating the image, selectors, service account, environment, and resource quotas. Future releases will bring support for updating liveness and readiness probes, security contexts, and ports. (Pull request to add the new kubectl set volume command)
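The existing “kubectl set image” command shows the pattern; the “kubectl set volume” invocation below is our reading of the pull request, not a confirmed interface:

    # Existing: update a container image, triggering a rolling update
    kubectl set image deployment/myapp web=myapp:v2

    # Proposed (hypothetical syntax): add an emptyDir volume to the same deployment
    kubectl set volume deployment/myapp --add --name=cache --mount-path=/cache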

You can now store binary data in a ConfigMap. Previously, if you wanted to pass a Java keystore (a binary file) to your application using a ConfigMap, you had to implement your own Base64 encoding and decoding. This encoding and decoding is now built in and easy to use: “kubectl create configmap --from-file”.
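A sketch of the new flow, using a hypothetical keystore.jks file:

    # kubectl detects the binary content and Base64-encodes it into the
    # new binaryData field of the ConfigMap
    kubectl create configmap keystore --from-file=keystore.jks

    # The resulting object looks roughly like this:
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: keystore
    binaryData:
      keystore.jks: <Base64-encoded bytes>

Mounting this ConfigMap as a volume delivers the decoded binary file to the container; no application-side decoding is needed.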

What does Kubernetes 1.10 mean for the Enterprise?

The new release will improve enterprise readiness in a few ways:

    • The Node “shutdown” taint improves overall reliability for stateful workloads by reducing the likelihood of stuck volumes after a node shuts down.
    • Security
      • The graduation of the API audit logging feature matures the observability of Kubernetes, improving the ability to troubleshoot and audit clusters in production
      • The “custom cipher suite” capability (see above) makes one aspect of security configuration more flexible
    • The binary data support in ConfigMap improves Kubernetes’ compatibility with Java, which is prevalent in production enterprise environments
    • Operational simplicity increases thanks to incremental improvements to the kubeadm tool, which exposes an easy-to-use CLI to deploy and manage the lifecycle of clusters (see the sketch after this list). Examples: 56084 is a step toward enabling kubeadm to deploy multi-master (HA) clusters, and 58259 improves Kubernetes compatibility with more external cloud providers.
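For reference, here is the basic kubeadm flow that these improvements build on (the address, token, and hash are placeholders):

    # On the first master: bootstrap a control plane
    kubeadm init

    # On each worker: join the cluster using the token printed by init
    kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>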