We are living in an age of cloud-native applications. The rise of microservices and the exponential growth in the scale of applications are fundamentally transforming modern software development and deployment. While this transformation is occurring first and foremost for applications that are born in the cloud, these days nearly any application is distributed, and requires auto-scaling, auto-failover, and redundancy.
The evolution in modern software delivery is affecting the way software is being shipped – both by those who deliver a SaaS-based application that is accessed remotely over the internet, and by traditional software vendors who ship software that needs to be installed locally. Let us explore how.
While we do live in a SaaS, on-demand world, not all software today is accessed “in the cloud”. We no longer send customers CD-ROMs with the installation files burnt on them, but much of the software we use today is still delivered – and will continue to be delivered for a while – as an executable that is installed locally. We still use Visual Studio Code and a myriad of desktop applications; organizations often prefer to have software installed internally, behind their firewall; and many security-conscious companies and government agencies use GitHub Enterprise instead of github.com. In addition, there are even more use cases for shipping software in the IoT world, where edge devices are not in a cloud environment.
Kubernetes applications on-prem
Applications can be delivered and consumed either as a SaaS product or as a local installation – and this is true both for traditional 3-tier applications and desktop services, and for newer container-based applications.
Ever since we released the first version of our Managed Kubernetes offering, we’ve seen many interesting companies using Kubernetes in a variety of ways – from mobile applications and video processing to AI. While some of these companies run their Kubernetes infrastructure on the public cloud, many large enterprises also have Kubernetes-based applications for internal consumption, running on internal, on-premises hardware.
An interesting use case we’ve seen in this regard is from software providers that are adopting Kubernetes as the underlying platform for deploying their applications.
We primarily see two types of companies in this space:
1. SaaS Providers – going on-prem
These are companies offering an application that is consumed by end-users as a SaaS/cloud service. Most often, these applications are hosted in the cloud – either public or private – and are accessed by users via a web-portal.
Since many of these applications are cloud-native, we see companies that have chosen Kubernetes as the underlying platform for the app, primarily to enable portability (more on this later). For example, I recently worked with a company that provides AI-as-a-Service, and another company providing an analytics platform that relies on data stored in the public cloud (Azure or AWS object storage) – both have designed their services as Kubernetes-based.
Some of these SaaS providers deliver products that are used by development and operations teams in large enterprises (think GitHub Enterprise, or other software delivery, security, and data processing solutions). As they grow their customer base to include Fortune 1000 companies and government agencies, the requirement often arises to access the service as a private SaaS offering, installed behind the firewall and operated internally.
Since these SaaS vendors already use Kubernetes as the backend for their applications, it is easier to use the same technology for running their on-premises version as well (rather than having to re-architect the application, maintain a separate product line, or compromise some of the key benefits that led them to choose Kubernetes in the first place).
So how do you run Kubernetes-based applications on-prem, as a private container offering?
2. Traditional Software Vendors – going cloud
These vendors often did not start in the cloud, and have been successful primarily by providing customers – particularly large enterprises – software that is installed either on local machines or in on-premises data centers. They are now looking to expand their market opportunity and provide their software as a SaaS offering. Many of them are now looking at containers and Kubernetes as the underlying platform to modernize their offering and cloud-ify it – so it is “cloud-native” (or “cloud-aware”). This, too, prompts them to look for Kubernetes offerings across different infrastructure – so that they can continue to develop and release their product both as a SaaS solution and as a local installation.
Kubernetes as an Operating System
Kubernetes has emerged as the platform of choice for deploying cloud-native applications. In essence, Kubernetes is emerging as an Operating System (not in the classical sense, but from the perspective of a distributed, cloud-native application.)
It is easy to see why. Kubernetes provides many of the features that are critical for running a cloud-native application:
- High Availability: Automatically start another instance if a service instance fails.
- Granular, near-infinite scalability: Each service runs behind a load balancer, giving you the ability to scale each service individually.
- Rolling Upgrade: You can upgrade each service independently of others, with rolling upgrade and rollback.
- Discovery: Services discover each other through a built-in, DNS-based name service.
- Portability: Thanks to containers and the availability of many ‘Kubernetes-as-a-Service’ platforms, Kubernetes-based applications are portable across any environment or infrastructure provider that runs Kubernetes.
While developers certainly care about these capabilities, you can see how crucial these are – particularly the last point – to IT Operations.
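To make the first few capabilities concrete, here is a minimal sketch of a Deployment paired with a Service. All the names, the image, and the ports are hypothetical, chosen only to illustrate replica-based availability and scaling, rolling upgrades, and DNS-based discovery:

```yaml
# Illustrative sketch only – names, image, and ports are hypothetical.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3            # scale this one service independently
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1  # upgrade one pod at a time, with rollback available
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: example.com/web:1.0    # hypothetical image
        readinessProbe:               # failed pods are replaced automatically
          httpGet:
            path: /healthz
            port: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: web        # other services reach this one via DNS, simply as "web"
spec:
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080
```

Because nothing in these manifests is provider-specific, the same definition can, in principle, be applied unchanged on any conformant Kubernetes cluster – which is the portability point above.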
It is easy to assume that Kubernetes by itself provides all the necessary features and services that would allow you to easily, and consistently, use it both in the cloud and on your own, on-premises infrastructure. The reality is far from it. There are critical gaps, specifically when you consider that the majority of enterprises operate in a hybrid/multi-cloud environment and need to support, on average, five different clouds or several data centers – both in the cloud and internal.
Public Cloud Providers and Consistency
Different cloud providers offer their own version of Kubernetes. When different vendors provide a service (even one based on open-source software like Kubernetes), consistency between providers becomes questionable. For example, Amazon EKS supports Kubernetes version 1.10.3, Google’s GKE supports the last three releases, and Microsoft Azure still seems to be on version 1.9.
In addition to the different Kubernetes versions, the underlying OS (kernel version) also differs. This may or may not pose a problem for most applications, but it is worth considering, especially when security and compliance are important.
Perhaps the biggest hurdle is the tight coupling each cloud provider creates between Kubernetes and its other services – networking, storage, monitoring, authentication, and more. These related services are different, and are integrated differently, from cloud to cloud. So, for example, a Kubernetes application on AWS would integrate with certain AWS-specific services – say, CloudWatch for monitoring or IAM for authentication – that are different from the comparable services available on a different cloud provider.
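Storage is a simple illustration of this coupling: the same logical storage class must be defined differently per provider. In the sketch below, the class name `standard` is arbitrary, and the provisioners are the in-tree volume plugins for each cloud:

```yaml
# On AWS: volumes are provisioned as EBS disks
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
---
# On GCP: the same logical class maps to a different provisioner and parameters
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
```

An application that references the `standard` class is portable in name only – someone still has to define and maintain the provider-specific mapping on every cloud, and on-premises there may be no equivalent provisioner at all.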
While there is a lot of buzz around Kubernetes solutions provided by public cloud providers, KubeCon survey results show there are also real use cases and adoption already happening in on-prem data centers. From CI/CD to security-sensitive applications, bare metal accounts for more than 50% of container workloads.
While there are solutions available to create a single Kubernetes cluster on a private cloud, there is a clear lack of solutions that can provide multi-cluster management or a simple Kubernetes-as-a-Service (KaaS) on these bare-metal servers, which typically sit inside a private data center.
Managed Edge Cloud
With the advent of smart devices like Alexa, Nest, and Ring, IoT has become ubiquitous in our always-connected world. Similarly, Point-of-Sale (POS) devices in retail stores present a use case not unlike that of IoT – a siloed, small computing footprint at the edge. As Kubernetes becomes the de-facto Operating System of choice, there is a need for managing these ‘edges’. See the blog from Chick-fil-A describing their experience with running such clusters at their local stores.
As you can imagine, the challenges here are different – mostly around auto-deployment of clusters, recovery and backup in case of failure, and remote management for troubleshooting and upgrades.
While managed edges are great for retail use cases, there is a whole class of use cases where Kubernetes is being used as an underlying OS for deploying distributed applications that are installed locally – for example, Kubernetes-based applications that run behind a firewall. The challenges here are similar to the edge use cases, requiring that a Kubernetes cluster comes up:
- Without any internet connection (i.e., it cannot download images dynamically)
- Almost without user interaction: the deployment of a cluster on a fixed set of nodes happens automatically
- With high availability built in: this requires multi-master support for failover
- With backup and recovery built in: it already includes the tools and processes for automatically backing up the state of the cluster and recovering from it
- In an “OEM-able” state: the end user only sees and interacts with the application itself, while the underlying Kubernetes infrastructure is completely invisible
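As a rough sketch of how some of these requirements might translate into cluster bootstrap configuration, the following hypothetical kubeadm file points the cluster at a local image registry (for air-gapped installs) and a load-balanced control-plane endpoint (for multi-master HA). The kubeadm API version, addresses, and registry name are all assumptions for illustration and vary across Kubernetes releases:

```yaml
# Hypothetical sketch – API version, addresses, and registry are assumptions.
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.13.0
controlPlaneEndpoint: "10.0.0.100:6443"  # load-balanced VIP shared by multiple masters
imageRepository: registry.local:5000     # pull control-plane images from a local
                                         # registry instead of the internet
etcd:
  local:
    dataDir: /var/lib/etcd               # cluster state lives here; back it up for recovery
```

A real turnkey appliance would wrap configuration like this in automation, so that the end user never touches it – which is the “OEM-able” requirement above.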
For the reasons outlined above, many of us have already decided to use Kubernetes as the Operating System – for both our SaaS/cloud applications and our on-premises, locally installed software.
To accelerate development, simplify updates and ongoing operations, and reduce costs, you want to limit your software/management variations. To ensure success, you first need to make sure you have consistency and interoperability of the application code and management processes across the different types of infrastructure – so that regardless of where the app is running (private cloud, public cloud, or edge appliances), it is always developed, deployed, and operated in the same way.
You need to choose a solution that would help you:
- Ensure fidelity across different infrastructure: This means one API to deploy and maintain clusters everywhere, including on cloud providers.
- Give you the control you need: Be it the choice of network or storage provider, or simply using SAML2 for authentication, you should be able to integrate with the processes and tools of your choice.
- Managed Service: You want to focus on your application and not on infrastructure management – so choose a solution that manages your cluster footprint across different infrastructure for you, including version upgrades, patches, performance optimization, and more.
- Un-Managed: Lastly, since you are releasing software to your customers, pick a solution that will let you OEM it, so it can be delivered as a self-sufficient, ‘boxed’ platform in an environment or on an edge appliance that may not have any internet connectivity.
While there are some vendors that provide some of the capabilities recommended above, only one solution provides everything you need in order to standardize on Kubernetes as your operating system – no matter where you’re running. I encourage you to look at our Managed Kubernetes offering.
Stay tuned for the next installment in this blog series, where we go deeper into the Kubernetes at the edge/appliance use cases. I will share best practices and a blueprint for designing your underlying Kubernetes infrastructure in a way that is optimized specifically for Independent Software Vendors (ISVs), to provide a scalable architecture for your application – on any infrastructure.
Before co-founding Platform9, Roopak was a technical lead at VMware, where he helped architect and ship major vSphere products: Update Manager and vCloud Director. Before VMware, Roopak was an early engineer at an early-stage mobile computing startup.
Outside of work, Roopak is a fan of audiobooks, likes cooking, following sports, and keeping up with his kids on the soccer field.