Kubernetes is widely recognized as a platform that enables highly efficient use of infrastructure, but organizations need to understand that those benefits are fully realized only when the developer experience itself is optimized.
Developers are increasingly assuming responsibility for systems operations. In the past, it was common to have a separate team of systems administrators responsible for deploying applications, monitoring resource use, and responding to incidents that disrupted services. Developers who use agile methodologies are more likely to employ practices that include responsibility for ensuring their software operates efficiently and reliably.
This is understandable, since one aspect of agile engineering practices is the frequent release of new versions of services. Rather than hold up the release of an update so that multiple features can be included, it’s more efficient to release small changes continually.
Continuous integration/continuous delivery (CI/CD) pipelines, coupled with version control platforms that promote collaboration, enable this kind of rapid release of new features. Rapid release also means that the developers who are working in the code and revising it are in the best position to understand the cause of performance or reliability problems.
Building a Proper DevOps Platform
Developers depend on a stable environment to work. This entails high uptime, reliability, and performance. Platform engineering teams should treat the platform as a product. They provide this platform for developers, enabling them to create services for their customers.
This includes building teams, processes, and a culture that continually improves, not just sustains, the platform. Using agile approaches, developers can deliver initial applications on a platform they manage themselves, but they should expect platform engineers to take over responsibility for the platform.
Kubernetes can help deliver the optimal engineering experience. It’s designed to automate and orchestrate reliable computing resources for containerized applications. One important aspect of Kubernetes is that it can be deployed in multiple environments, including:
- Centralized data centers, either on-premises or colocated in a third-party data center
- Micro data centers used in remote offices
- Point-of-presence locations such as retail stores
- Edge computing settings for Internet of Things (IoT) deployments
This ability to deploy Kubernetes to a wide variety of environments is a significant advantage over deploying customized, case-specific servers. With a single, common platform for executing workloads, developers can spend less time on operational issues with the help of tooling that supports the Kubernetes platform.
Security and Compliance Concerns
Consider the challenges of complying with regulations and policies while maintaining an agile, rapid-feature-delivery engineering environment. There are multiple dimensions of compliance that must be attended to.
For example, developers who also carry operations responsibilities need tools to help ensure authentication mechanisms are in place. In many cases, authentication and identity management services are provided by a centralized service that needs to be accessible from various Kubernetes deployments.
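One common way to wire a cluster to a centralized identity provider is to configure the Kubernetes API server to trust an OpenID Connect (OIDC) issuer. A minimal sketch of the relevant API server flags follows; the issuer URL, client ID, and claim names are placeholders that depend on your identity provider:

```yaml
# Fragment of a kube-apiserver static pod spec (illustrative values).
# With these flags, the API server validates bearer tokens issued by
# the configured OIDC provider and maps claims to usernames and groups.
spec:
  containers:
  - name: kube-apiserver
    command:
    - kube-apiserver
    - --oidc-issuer-url=https://idp.example.com      # placeholder IdP URL
    - --oidc-client-id=kubernetes                    # placeholder client ID
    - --oidc-username-claim=email
    - --oidc-groups-claim=groups
```

Because the same flags can be applied to every cluster, the identity provider stays centralized while each distributed Kubernetes deployment enforces authentication locally.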
Highly distributed systems like Kubernetes are constantly generating, storing, and transmitting data. Many regulations governing privacy and the control of sensitive information have rules about protecting the confidentiality of data. To meet these requirements, it’s a best practice to employ encryption for data in motion and data at rest.
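For data at rest, Kubernetes can encrypt sensitive resources such as Secrets before they are written to etcd by passing the API server an `EncryptionConfiguration`. A minimal sketch, with the key value as a placeholder:

```yaml
# EncryptionConfiguration enabling AES-CBC encryption of Secrets at rest.
# The secret value below is a placeholder for a base64-encoded 32-byte key.
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded-32-byte-key>
      - identity: {}   # fallback so previously unencrypted data stays readable
```

This file is referenced via the API server's `--encryption-provider-config` flag, which is exactly the kind of detail a platform team should configure once rather than leaving to application developers.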
Kubernetes environments should be deployed in ways that provide these encryption services by default. Application developers shouldn't have to learn the intricacies of configuring full-disk encryption or setting up TLS connections between nodes. Role-based access control (RBAC) is also essential for securing the platform. Given the large number of services and tenants, maintaining proper RBAC configurations can be difficult and requires supporting tooling.
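As a concrete illustration of what that tooling manages, here is a namespace-scoped Role granting read-only access to Pods, bound to a hypothetical `dev-team` group (namespace and group names are illustrative):

```yaml
# Role: defines what can be done (read Pods) within one namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: team-a        # illustrative namespace
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
# RoleBinding: attaches the Role to a group supplied by the identity provider.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: team-a
  name: pod-reader-binding
subjects:
- kind: Group
  name: dev-team           # hypothetical group name
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

Multiply this pair by every team, namespace, and cluster, and it becomes clear why RBAC maintenance at scale needs automation rather than hand-edited manifests.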
Kubernetes should be deployed with controls in place to support other governance requirements. For example, security scans should be configured to run reliably on all clusters. Again, this is a necessary capability, but not one that should require significant developer time.
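One common pattern for making scans run reliably is to schedule the scanner inside the cluster itself as a CronJob. The sketch below uses the open-source Trivy scanner illustratively; the image reference being scanned is a placeholder:

```yaml
# Nightly image scan; a non-zero exit code (vulnerabilities found) marks
# the Job as failed, which standard monitoring can then alert on.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: image-scan
spec:
  schedule: "0 2 * * *"    # every night at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: trivy
            image: aquasec/trivy:latest
            args:
            - image
            - --exit-code
            - "1"
            - registry.example.com/app:latest   # placeholder image to scan
```

Rolling the same CronJob out to all clusters via GitOps keeps the governance requirement satisfied without consuming developer time.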
Tooling should be in place to help with capacity planning and cost control. Kubernetes is designed to allocate resources to workloads that need them. Those resource demands can, and often do, change over time, so it’s important to monitor resource utilization and growth rates in workloads. If a cluster has insufficient resources, developers may be forced to limit features or find other workarounds to deal with the lack of capacity. Poor capacity planning can introduce significant friction in the development process and slow the creation of new services.
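Capacity planning and cost control start with per-namespace guardrails. A ResourceQuota like the following sketch (names and limits are illustrative) caps what one team can consume, and the resulting request/limit data is what utilization monitoring builds on:

```yaml
# Caps aggregate CPU and memory for a team's namespace. Because the quota
# counts requests and limits, it also forces workloads to declare them,
# giving the scheduler and capacity-planning tools usable data.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a        # illustrative namespace
spec:
  hard:
    requests.cpu: "8"
    requests.memory: 16Gi
    limits.cpu: "16"
    limits.memory: 32Gi
```

Tracking actual usage against these quotas over time reveals the growth rates that drive cluster sizing decisions, before developers hit a capacity wall.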
Keeping the Promises
The promise of Kubernetes to more efficiently employ computing and storage resources is best realized when you take into account how Kubernetes is used and maintained by developers. Kubernetes is complex, and as organizations move from managing a small number of clusters in a single data center to hundreds or thousands of distributed clusters, there's a real risk of not knowing how to run such a distributed platform optimally.