The Gorilla Guide to Serverless on Kubernetes – Chapter 5: Why Run Serverless on Kubernetes?

This is an excerpt from The Gorilla Guide to Serverless on Kubernetes, written by Joep Piscaer.

You can download the full Gorilla Guide here.

It's Like Peanut Butter and Jelly

Kubernetes is an open source platform for automating the deployment, scaling, and management of containerized applications. Running a serverless framework on top of Kubernetes makes it highly portable: it runs anywhere Kubernetes runs, whether on your laptop, in a public cloud, in an on-premises data center, or with a managed Kubernetes service provider.

As organizations adopt cloud services, they typically end up using a variety of services from different vendors alongside existing workloads in the on-premises data center. A single serverless experience across all of these environments benefits developers, because it removes the need to learn each vendor's separate offering. That lets developers and teams get things done more quickly, deliver higher-quality code, and minimize additional training.

The Fission serverless framework handles container lifecycle duties, such as creating and managing the containers that run your functions, abstracting away most of the complexity of operating a Kubernetes environment. Fission itself runs as a set of microservices on top of Kubernetes.

A well-designed serverless-based application architecture is inherently scalable, especially when deployed to an elastic capacity provider, such as the public cloud or a large IaaS provider. By utilizing the intelligence in the underlying Kubernetes platform, functions can automatically scale up and down horizontally.
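As a minimal sketch of what this looks like in practice with the Fission CLI (the function name, source file, and thresholds here are illustrative, not prescriptive): a function deployed with the "newdeploy" executor gets replica bounds and a CPU target, which Fission turns into a Kubernetes HorizontalPodAutoscaler.

```shell
# Deploy a function with Fission's "newdeploy" executor so it can scale
# horizontally. Fission creates a Kubernetes Deployment plus a
# HorizontalPodAutoscaler that adds or removes replicas to keep average
# CPU utilization near the target.
#   --minscale 1    never scale below one replica (avoids cold starts)
#   --maxscale 5    cap scale-out at five replicas
#   --targetcpu 60  scale out when average CPU utilization exceeds 60%
fission function create --name hello --env python --code hello.py \
  --executortype newdeploy --minscale 1 --maxscale 5 --targetcpu 60
```

Under load, Kubernetes itself handles the scale-out and scale-in; the function code doesn't change.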

Avoiding Lock-In

Most current serverless offerings are managed services: part of a portfolio of technologies in one of the major clouds, running as part of that ecosystem. These offerings commit customers to the provider's ecosystem and push them toward cloud-specific services, so a function written for one cloud isn't portable to another without refactoring.

Other providers try to tie you into their ecosystem in subtle ways by nudging users to use ecosystem-specific services and technologies. Fission takes the opposite route, offering all the benefits of serverless without any of the cost or lock-in. It’s multi-cloud, multi-tool friendly, enabling developers to choose the best tool for the job, instead of forcing the default options in a given cloud ecosystem. Fission gives maximum freedom in the developer continuous integration (CI) and continuous delivery (CD) pipeline, as well as production tooling such as monitoring and tracing.

Note: Lock-in like this happens in subtle ways; for instance, being forced to use the packaged (and often monetized) monitoring solution. This almost defeats the purpose of Kubernetes, which is a freely available technology that works across clouds, in on-premises environments and locally. Fission prevents this lock-in and dependency and promotes decoupling, re-use of code, and portability, all of which reduce friction during the lifetime of the function. This allows developers to modernize applications, even when using on-premises infrastructure.

The Fission advantages go beyond avoiding lock-in, too. For example, many organizations with existing investments in private data centers end up with spare server capacity, such as CPU and memory; that's just how physical hardware is bought. Fission can run on this idle, already-paid-for capacity, effectively giving you a free FaaS platform. Free serverless is a major advantage over the pay-per-use model of public cloud providers, which can become expensive in a hurry.

Even for on-premises Kubernetes environments with little to no spare capacity, Fission is an economically good fit, as expanding the data center with just additional server capacity (leveraging other data center investments like network and storage) is almost certainly cheaper than a public FaaS service.

Running your serverless alongside existing containers and integrating with container-based tooling – including monitoring, logging and so on – eases the operational burden and simplifies the adoption of a serverless framework. This allows Fission functions running on Kubernetes to use services like message queues and databases running on the same platform.

Most applications are a hybrid of functions and containers, so it makes sense to run the different components of an application physically near each other where possible. Not only does this improve latency, it also avoids the cloud vendors' notoriously expensive data transfer (egress) fees.

Freedom of Choice with Kubernetes and Fission

Kubernetes is taking the world by storm, quickly replacing virtualization stacks and IaaS services with container-based approaches. Kubernetes’ deployment experience had a rocky start, being notoriously difficult to install and configure, but most of that has been overcome; many public cloud vendors and service providers now offer a hosted and managed Kubernetes service that negates most of this complexity. Examples include Amazon EKS, Google GKE, and Platform9 Managed Kubernetes.

For functions running on Fission, it’s easy to take advantage of the rich Kubernetes ecosystem and the wide range of data services it supports, like message queues and databases, as well as integrating with underlying infrastructure components for software-defined storage and networking.

Instead of forcing tenants to use specific monetized services, running serverless on Kubernetes provides the option of using free and open source tooling instead. This means running free and open source data services and middleware (databases, message queues, key/value stores), web servers, and more. This is why Fission also integrates with open source projects like Prometheus, which we’ll talk about a little later.

Faster, Easier Development

The Kubernetes-based approach makes it possible to extend Fission's language and runtime support to anything that runs in a container. Adding a new language is relatively easy: you create a new container image, or modify one of the existing environments to fit your needs.
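To make this concrete (a minimal sketch using the Fission CLI; the environment name, image, and function file are illustrative, and a custom image built from one of Fission's stock environments could be substituted for the image shown):

```shell
# Register a language environment backed by a container image. Fission's
# stock images (e.g. fission/python-env) can be replaced with any custom
# image that implements the environment interface.
fission environment create --name python --image fission/python-env

# Functions then reference the environment by name; the code runs inside
# pods created from that image.
fission function create --name hello --env python --code hello.py
fission function test --name hello
```

Because an environment is just a container image, supporting a new language or runtime version is a packaging exercise rather than a platform change.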

In upcoming posts, we'll discuss the deployment and operations aspects that make Kubernetes ideal for running serverless applications.

Can’t wait?

To learn more about Serverless, download the Gorilla Guide: Serverless on Kubernetes.
