This is an excerpt from The Gorilla Guide to Serverless on Kubernetes, written by Joep Piscaer.
You can download the full Gorilla Guide here.
Even though serverless feels like the hot new thing, it’s actually not new; it’s been around for about five years. JavaScript (running on Node.js) is the predominant language in the field, but Java, Go, Python, and C# are also popular. Different platforms also provide ways to invoke other languages indirectly.
As you might expect, all the big public cloud vendors have a serverless play: Amazon has Lambda, Google has Cloud Functions, and Microsoft has Azure Functions.
The landscape is much larger than just the big service offerings, though, as we can see below. There are many frameworks, cloud services, and on-premises platforms available, and the landscape is evolving quickly. This gives you choices for building out your serverless infrastructure.
Let’s start with an overview of the ones you’re most likely to know about, the public cloud players.
AWS Lambda
Amazon launched Lambda in 2014, which was the first commercially available serverless platform. It’s part of the Amazon Web Services (AWS) cloud computing portfolio, and is tightly integrated in that ecosystem. It runs in the AWS cloud, with no option to run on-premises and only limited options for running locally on a developer’s machine. Future offerings may include the ability to run Lambda functions closer to the edge. There’s also a serverless database option, called Aurora Serverless.
Lambda functions can be triggered by numerous events, including:
- Database changes
- File and object storage changes
- Messages in a publish/subscribe queue
- HTTP requests (via an API Gateway)
This is only a sample, as there are many more. Use cases include processing images as they’re uploaded to S3, reacting to updates to DynamoDB tables, responding to website clicks, and handling sensor data from IoT-connected devices.
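To make the trigger model concrete, here’s a minimal sketch of what an S3-triggered Lambda handler could look like in Python. The event shape follows AWS’s documented S3 notification format; the bucket and object names are hypothetical, and a real function would do actual work (such as generating a thumbnail) instead of just collecting keys.

```python
# Sketch of an S3-triggered Lambda handler (Python runtime assumed).
# AWS delivers S3 notifications as an event with a "Records" list.

def handler(event, context):
    """Extract the bucket and object key from each S3 record."""
    processed = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # A real function would fetch the object (e.g. via boto3)
        # and process it here -- resize an image, index a file, etc.
        processed.append(f"{bucket}/{key}")
    return {"statusCode": 200, "processed": processed}
```

The same handler signature works for the other trigger types; only the shape of `event` changes per event source.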
Lambda is backed by performance objectives. AWS’s goal is to start a Lambda instance within 100 milliseconds of an event, but there are limits to the total duration of a function; it’s currently capped at fifteen minutes. Although Lambda functions are elastic and scale automatically, they’re also limited to 1,000 concurrent executions by default in a given region, per account. This limit can be easily reached, especially when combining production and testing.
Microsoft Azure Functions
Microsoft’s Azure Functions is a relatively young service, but very similar to Lambda. Since it’s part of the Azure ecosystem, the underlying infrastructure runs Windows, not Linux. Besides some unique language support (C#, F#), there are two major selling points for Azure Functions.
First is the ability to run on Azure Stack, which runs in the data center. This puts Azure Functions much closer to existing on-premises workloads, which many enterprises still run (and will run for years to come). This makes Azure Functions a great fit for serverless developers in organizations that run the majority of their workloads on-premises.
The second important distinction is tight integration with Visual Studio, Microsoft’s Integrated Development Environment, or IDE. This integration offers the ability to debug functions locally from a cloud-triggered event. As any developer will recognize, being able to set a breakpoint in a remotely triggered function is very useful.
Azure Functions also offers the ability to keep functions in hot standby, mitigating cold startup latency problems. Otherwise, functions are subject to the same kinds of limits as on AWS: a maximum runtime (five minutes, by default) and a cap on concurrent executions (200 per function in a region).
Google Cloud Functions
Google’s Cloud Functions (GCF) is the newest service of the three big cloud providers, although the PaaS-like App Engine has been around since 2008. The biggest difference between GCF and the others is its trigger support, which is focused on Google’s Pub/Sub messaging bus, the de facto standard for inter-service communication in the Google world.
In many ways, GCF is very similar to Lambda, but it remains a fairly simple alternative. Google also has a Firebase-integrated version of GCF to cater to mobile backend developers.
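Given GCF’s focus on Pub/Sub triggers, here’s a sketch of what a Pub/Sub-triggered background function could look like in Python. The `(event, context)` signature and the base64-encoded `data` field follow Google’s documented background-function format; the message contents and function name are hypothetical.

```python
import base64

# Sketch of a Pub/Sub-triggered Cloud Function (Python runtime assumed).
# Background functions receive the Pub/Sub message as `event`, with the
# payload base64-encoded under the "data" key.

def handle_message(event, context):
    """Decode and return the Pub/Sub message payload."""
    payload = base64.b64decode(event["data"]).decode("utf-8")
    # A real function would act on the message here: log it,
    # write it to a datastore, or call another service.
    return payload
```

This is the same event-driven shape as a Lambda handler; only the delivery mechanism (Pub/Sub rather than, say, S3 notifications) differs.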
Besides the “big three,” there are other serverless options, broadly divided into three categories:
IBM and Oracle have FaaS services in their public clouds, too. These are similar to the three we’ve discussed, but aren’t as widely used. There are also a number of vendors in the twelve-factor camp, like Auth0, that offer serverless frameworks and services.
A number of edge computing specialists, like CloudFlare, have begun to offer FaaS services at the edge. These are aimed at use cases that need close proximity to users and devices, like IoT and web. Lambda has a similar offering called Lambda@Edge, running in the CloudFront content delivery network.
The third category is serverless frameworks. Rather than being managed services, these frameworks offer freedom in where to run and are generally not priced on a consumption model. Most are free and open source, or at least not tied to a cloud vendor’s ecosystem. They’re infrastructure-agnostic, so they can be used as a building block by a service provider or as part of an existing on-premises technology stack, running on top of container platforms like Docker and Kubernetes. Examples include OpenFaaS, serverless.com, and Fission.
The frameworks have several key advantages over the commercially available services, including greater control over both the infrastructure they run on and cost. In the next chapters, we will put the spotlight on one particular framework and the advantages it can provide you in your serverless journey: Fission, from Platform9.
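To give a feel for how lightweight these frameworks can be, here’s a sketch of a function body for a Kubernetes-based framework. Fission’s Python environment, for example, invokes an entry point named `main` and returns its result as the HTTP response body; treat the exact entry-point convention as an assumption to verify against your framework’s docs.

```python
# Sketch of a function for a Kubernetes-based FaaS framework
# (modeled on Fission's Python environment, which calls `main`
# and uses the return value as the HTTP response body).

def main():
    return "Hello from a serverless function on Kubernetes!\n"
```

With a framework like this, deploying the function is typically a matter of registering the file with the platform’s CLI and mapping an HTTP route or event trigger to it; no container image or server configuration is written by hand.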
In upcoming posts, we’ll dive deeper into Fission and some key use cases for real-world applications.
To learn more about Serverless, download the Gorilla Guide: Serverless on Kubernetes.