Kubernetes is an open-source solution for automating the deployment, scaling, and management of containerized applications. The business value provided by Kubernetes extends into the Serverless world as well. In general, Serverless <3 Kubernetes: Kubernetes is the ideal infrastructure for running Serverless applications, for a few key reasons:
1. Kubernetes allows you to easily integrate different types of services on one cluster
From a developer’s standpoint, apps typically incorporate multiple types of components. Any complex solution will use long-lived services, short-lived functions, and stateful services. Having all these options in one Kubernetes cluster lets you use the right tool for each job and still integrate everything easily, whereas separate clusters would add both operational and cost overhead. FaaS works best in combination with other apps that run natively on containers, such as microservices. For example, FaaS may be the right fit for a small REST API, but it needs other services to store state, and it is well suited to event handlers driven by triggers from storage, databases, and Kubernetes itself. Kubernetes is a great platform for all these services to interoperate on top of.
2. Kubernetes is great for building on top of
It provides powerful orthogonal primitives and comprehensive APIs.
3. You can benefit from the vibrant Kubernetes community
All the work being done in the community in areas such as persistent storage, networking, security, and more, ensures a mature and always up-to-date ecosystem of enhancements and related services. This allows Serverless to take advantage of things like Helm, Istio, ConfigMaps, Secrets, Persistent Volumes, and more.
4. Kubernetes allows container-based applications to scale reliably and in a cost-effective manner
By clustering containers within a container manager where they can be scheduled, orchestrated, and managed, Kubernetes considerably reduces operations cost compared to running without a cluster manager, and greatly increases the reliability of your service.
5. Kubernetes provides scheduling, cluster management, service discovery, and networking
All of these are required in a FaaS framework, so by running Serverless on top of Kubernetes you avoid reinventing them: you can focus on the serverless functionality and leave container orchestration to Kubernetes.
6. Kubernetes provides portability
Kubernetes has emerged as the de facto standard for container orchestration on any kind of infrastructure. It thus ensures a consistent, native experience across cloud providers and across environments, from staging, to UAT, to production. This enables true portability across any infrastructure, private or public. (Keep in mind, though, that as we saw in Part 2 of this series, depending on your chosen Serverless framework the application may need to be rewritten to migrate to a different environment: not because of the Kubernetes backend, but because of lock-in to the integrated services of a specific cloud provider.)
Challenges with Kubernetes for Serverless applications:
While Kubernetes is a great underlying orchestration layer, it requires extensive setup and carries management overhead. There is still a significant amount of software “plumbing” to build before deploying a Serverless application, even with Kubernetes: the code/function has to be written, the code has to be deployed, containers need to be built and registered, and then various configuration steps on Kubernetes (e.g. deployment, service, ingress, auto-scaling, logging) have to be carried out. Kubernetes is as complex to manage as it is robust, and the resulting learning curve and operational complexity pose challenges for Ops that hinder Serverless adoption, particularly in on-prem environments.
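To make that plumbing concrete, here is a rough sketch (all names, paths, and the image URL are hypothetical) of the minimum Kubernetes objects a single HTTP function typically needs once its container image has been built and pushed: a Deployment, a Service, and an Ingress.

```yaml
# Hypothetical example: the bare-minimum objects to expose one function over HTTP.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-fn
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-fn
  template:
    metadata:
      labels:
        app: hello-fn
    spec:
      containers:
      - name: hello-fn
        # An image you must build, tag, and push to a registry yourself.
        image: registry.example.com/hello-fn:v1
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: hello-fn
spec:
  selector:
    app: hello-fn
  ports:
  - port: 80
    targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-fn
spec:
  rules:
  - http:
      paths:
      - path: /hello
        pathType: Prefix
        backend:
          service:
            name: hello-fn
            port:
              number: 80
```

And even this leaves out the auto-scaling and logging configuration mentioned above, as well as the image build and registry push steps that precede it.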
What is needed is a solution that reduces the time and effort spent “plumbing” the Kubernetes infrastructure required for Serverless applications. Ideally, developers would have a framework where functions are deployed instantly with one command: no containers to build, no Docker registries or Kubernetes clusters to manage. Developers should focus only on the code, while the complex steps involved in packaging, deploying, and managing applications are automated by a Serverless framework that is entirely native to Kubernetes.
Enter the open-source, Kubernetes-native Serverless framework: Fission.
More on that in the next blog in this series.
In previous roles, Vamsi was the CTO of RiskCounts, a FinTech based in NYC. Prior to that, he spent eight years as the Chief Architect for Red Hat’s Global Financial Services vertical, based out of NYC. Vamsi also spent two years as the General Manager (Financial Services) at Hortonworks. In both roles, Vamsi was responsible for driving Red Hat’s and Hortonworks’ technology vision from a client business standpoint. The clients Vamsi engages with on a daily basis span marquee financial services names across major banking centers on Wall Street and in Toronto, London, and Asia, including businesses in capital markets, retail banking, wealth management, and IT operations. Vamsi holds a BS in Computer Science and Engineering as well as an MBA from the University of Maryland, College Park. He is also a regular speaker at industry events on topics ranging from cloud computing, big data, AI, and high-performance computing to enterprise middleware. In 2013, Wall Street and Technology Magazine identified Vamsi as a Global Thought Leader. Vamsi writes weekly on the financial services business and technology landscape at his highly influential blog – http://www.vamsitalkstech.com