Read on to learn why serverless on K8s is a match made in heaven for modern application delivery.
Serverless and Containers: Friends or Foes?
Serverless versus containers: the tech industry has pitted serverless and containers against each other. We are constantly reminded how well serverless scales, and how well Kubernetes scales. Are they talking about the same thing?
The serverless movement and the container movement share the same vision: simplify the deployment and management of your applications. Both promise peace of mind and near-endless scalability, and both deliver it the same way, by building an orchestrator that simplifies the management and deployment of software applications and reduces cost.
Serverless and containers are friends. Did you know that your serverless functions are already running on containers behind the scenes? Did you know that an orchestrator is usually running them for you?
More specifically, we can think of serverless functions and Docker containers running on Kubernetes as best friends. Both simplify your life and let you build applications at a high abstraction level, so you can focus more on developing your applications and less on the infrastructure underneath.
Microservices or Serverless Functions: Leverage the right resources for your workload
When it comes time to picking how to architect your application, there are a few things to consider: How big is the workload? How often does it run? How big is the functionality you are building?
The main difference between container-based microservices and serverless functions is how long they run. If you are adding smaller components, you might prefer serverless functions rather than API-based microservices. Functions are convenient for scenarios like transforming data in a real-time analytics stream, tagging images or audio clips, and verifying uploaded user data.
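A use case like verifying uploaded user data often amounts to a small validation routine behind an HTTP trigger. Here is a minimal Python sketch; the record fields and rules are hypothetical, and the `main` entry point follows the convention used by Fission's Python environment, which looks for a function named `main` by default:

```python
def validate_user_record(record):
    """Verify an uploaded user record; return (ok, list_of_errors).

    The field names and rules here are illustrative, not a fixed schema.
    """
    errors = []
    if not record.get("email") or "@" not in record["email"]:
        errors.append("invalid email")
    if not isinstance(record.get("age"), int) or record["age"] < 0:
        errors.append("invalid age")
    return (len(errors) == 0, errors)


def main():
    # In a real deployment, the payload would come from the HTTP request body.
    ok, errors = validate_user_record({"email": "a@example.com", "age": 30})
    return "ok" if ok else "; ".join(errors)
```

Because the function is this small and self-contained, it is cheap to deploy, test, and throw away, which is exactly the kind of component the text above describes.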
If the component you are building runs infrequently or if your workload is very spiky, serverless functions could be a great option for you.
Do you want to prototype some functionality, deploy it and test it out in minutes? Then serverless functions might be just the right tool for you.
Some serverless solutions also offer workflow capabilities, allowing you to define a sequence of multiple functions so you can build complex apps that span many functions.
You can check out some other top uses of serverless functions here.
Serverless on Kubernetes: Putting a Serverless Platform on an Orchestration Platform
Using containers with Kubernetes as the orchestrator allows for great flexibility in application development. Kubernetes provides a platform that is uniform, scalable, and easy to manage. Kubernetes provides a great foundation for running any kind of workload efficiently and serverless functions can be one of those workloads.
You should be able to run your serverless functions alongside your Docker containers, in one place, managed by one control plane. You should be able to leverage the economies of scale of your Kubernetes cluster, run both serverless functions and microservices side by side, and have them communicate easily with minimal overhead.
Start writing and running serverless functions on Kubernetes using Fission. Fission is an open source Functions-as-a-Service (FaaS) serverless framework for Kubernetes.
Fission enables you to code serverless functions in any language, and have them run wherever you have a Kubernetes cluster: in the public cloud, in your own data center, or even on your laptop. Fission automatically manages the infrastructure for you. There’s no need for in-depth knowledge of Kubernetes at scale, no containers to build or registries to manage.
With Fission, you can get started quickly and get the most out of your k8s cluster, benefiting from speed and cost savings of serverless – on any environment.
Why Serverless on Kubernetes with Fission
Accelerate time to value with Kubernetes
Fission allows you to quickly develop Kubernetes-based apps with no need for Kubernetes expertise, let alone expertise in managing Kubernetes at scale.
Focus on Code, Not Plumbing
Just write your code, and Fission will make it run on Kubernetes. Fission supports over 10 languages and 5 trigger types, and counting. Robust dev features, such as live reload, record-replay, automated canary deployments, and Prometheus integration, allow developers to move fast while improving the quality of their code from the start.
Run your “Lambda-like” service anywhere: On-premises, too!
Since Fission is open source and works on any Kubernetes cluster, it enables enterprises to easily run a Lambda-like serverless service on any infrastructure, using cheaper instances on public clouds or their own data centers.
Fission lets you benefit from the abstraction, speed, and cost savings of serverless, with full control of your code, integrations, services, and the infrastructure where you deploy. You're not locked in to a specific cloud provider, and you can even run on-prem, enabling developers to take advantage of serverless in private data centers while improving infrastructure utilization and lowering costs. Running on-prem is also critical for certain industries, such as financial services, that require high security and clear isolation of their workloads.
Deploy Multi-cloud & Avoid Cloud-Provider Lock-In
With zero lock-in, Fission allows you to stay cloud agnostic. You can easily deploy your serverless functions across all clouds or port them between environments. Your developers don’t have to worry about how AWS Lambda works differently from Google Cloud Functions, and so on.
Simple, Inexpensive, Low Maintenance
Fission offers fine-grained cost and performance optimization controls. You can configure specific CPU and memory resource usage limits, set up rules for autoscaling, and set min/max instance counts for your serverless functions. These are the same controls you already use for your containers, so you can reason about both kinds of resources in the same way. Maximize your resource utilization and increase the efficiency of your workloads, including in on-prem environments.
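The min/max and autoscaling parameters boil down to a simple rule: run enough replicas to meet demand, but never fewer than the floor or more than the ceiling you configured. A Python sketch of that clamping logic, with illustrative numbers rather than Fission defaults:

```python
import math

def desired_replicas(current_load, target_load_per_replica,
                     min_replicas, max_replicas):
    """Target-tracking autoscaling sketch: enough replicas to keep each
    one at or below its target load, clamped to the configured bounds."""
    if target_load_per_replica <= 0:
        raise ValueError("target load must be positive")
    wanted = math.ceil(current_load / target_load_per_replica)
    return max(min_replicas, min(wanted, max_replicas))

# With a target of 100 req/s per replica and bounds of 1..5:
# 450 req/s needs 5 replicas; zero traffic still keeps the floor of 1.
```

Setting the floor above zero trades a little idle cost for the guarantee that a spiky workload never hits a fully cold start.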
Same Tooling, Same Infrastructure
When your serverless functions run on Kubernetes, you can leverage the same tooling that you use in the rest of your Kubernetes ecosystem. This means Istio, ConfigMaps, Secrets, and Persistent Volumes. It also means logging, metrics aggregation, authentication, and authorization. You can manage rollouts, and protect against brownouts, for your microservices and your functions in the same way. Prometheus integration? Fission has you covered. Canary deployments? Fission has you covered. You don't have to learn your tools twice: use the same tools throughout your entire infrastructure.
Fission Workflows make it easy to build complex apps that span many functions
Fission Workflows is an open source framework that allows you to orchestrate a set of serverless functions into complex applications without directly dealing with networking, message queues, and so on.
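Conceptually, a workflow is a declared sequence of functions where each output feeds the next input, with the framework handling transport, retries, and state between steps. Fission Workflows itself is configured declaratively; this toy Python sketch only illustrates the composition idea with two hypothetical steps:

```python
def run_workflow(steps, payload):
    """Run functions in order, feeding each output into the next step."""
    for step in steps:
        payload = step(payload)
    return payload

# Two illustrative functions that could each be deployed independently:
def extract_words(text):
    return text.split()

def count_words(words):
    return len(words)

result = run_workflow([extract_words, count_words], "serverless on kubernetes")
# result == 3
```

The value of a workflow engine is that each step stays an independently deployable, scalable function, while the sequencing logic lives outside any one of them.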
Is the future containers or is the future serverless?
There is only one winner in the future, and that's you, the engineer. The future is one where engineers get to pick the right level of abstraction for the job without having to worry about cloud providers, vendors, and so on. The future is a place where engineers can easily code, deploy, and operate applications quickly. Sometimes that means writing serverless functions. Sometimes that means running microservices in Docker. What matters is that you have the power to make the right choice yourself, and the platform you use should support you in doing so.
My favorite Fission Features
Live reload: Test as you type
Serverless functions are great for prototyping. You write the code and easily deploy it to test it. You can also get your code compiled, run, and tested as you type. Live reload is a truly revolutionary feature because engineers can prototype, develop, and test their ideas in a matter of minutes. Can any cloud provider do that?
With live reload, Fission automatically deploys code as it is written into a live Kubernetes test cluster and lets you toggle between your development environment and the function's runtime, so you can rapidly iterate through coding and testing cycles. This allows bugs to be found and fixed earlier in the application development lifecycle. It also simplifies and increases the fidelity of integration tests, by running them earlier in the process in a live environment that is consistent with the configuration and related services (e.g., databases, API calls) used in production.
Record Replay: Easier testing and debugging
Fission offers record-replay functionality for easy testing and troubleshooting. This is very handy for debugging problems in production, reproducing them, and verifying they are fixed. It's easy to start recording traffic (or a subset of it) and use it to verify that a function behaves as expected. You can also use recordings for integration or load testing. Can any cloud provider do that?
Cost Optimization in Fission
To allow IT to maximize infrastructure utilization on both public cloud and on-premises resources, Fission offers fine-grained cost and performance optimization controls. Serverless functions can be configured with specific CPU and Memory resource usage limits, minimum/maximum number of instances and auto-scaling parameters. These parameters allow for functions to be tuned with an execution strategy that best fits the specific business use case. Functions can also take advantage of Fission’s pre-warmed container pools, which aggregate the cost of pooling over a large number of functions while providing the performance benefits for all of them.
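Pre-warmed pools trade a small amount of idle capacity for much lower cold-start latency: containers are initialized ahead of time and handed out on demand, and because one pool serves many functions, the idle cost is amortized across all of them. A minimal Python sketch of the checkout idea; the pool size and worker type are illustrative, not Fission internals:

```python
from collections import deque

class WarmPool:
    """Keep a fixed number of pre-initialized workers ready to serve."""

    def __init__(self, size, init_worker):
        self.init_worker = init_worker
        # Pay initialization cost up front, before any request arrives.
        self.idle = deque(init_worker() for _ in range(size))

    def checkout(self):
        # Fast path: reuse a warm worker when one is available.
        if self.idle:
            return self.idle.popleft(), "warm"
        # Slow path: pay the cold-start cost of creating one now.
        return self.init_worker(), "cold"

    def release(self, worker):
        self.idle.append(worker)
```

The first requests are served from the warm set; only demand beyond the pool size pays a cold start, and released workers go back into the pool for reuse.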
Try Fission Out:
Fission: An Open-Source, Kubernetes-native Serverless Framework
With Fission, developers can easily code, deploy, and operate serverless applications that are production-ready from the get-go. They can run anywhere: on-prem and in the public cloud.
If you are interested in learning more about how Fission works and getting started, check out the site.
More on Software Engineering Daily:
Check out the two SE Daily episodes we did with Soam Vasani on Fission:
- Serverless on Kubernetes with Soam Vasani discussing the open source framework
- Fission: Serverless on Kubernetes with Soam Vasani discussing improvements that were made to Fission since the first episode.