Fission’s Erwin van Eyk recently spoke at the JAX DevOps conference in the UK. Below is his interview with JAXenter about what’s in store for serverless and modern application development.
“We are continuously trying to move to higher levels of abstractions with our applications”, says JAX DevOps speaker, Erwin van Eyk, software engineer at Platform9 and researcher at Delft University of Technology. We asked him to share his thoughts on serverless and the radical change in developers’ lives!
Q. Serverless as a term is a rather controversial buzzword: servers are still in use. Moreover, everyone seems to understand something different by serverless – FaaS or BaaS, for example. So, first of all: what is serverless to you personally?
Erwin van Eyk: Serverless indeed remains a controversial term to use. On the one hand, many people argue for restrictive definitions that limit serverless to a specific cloud model (such as FaaS) or impose very specific constraints (such as “it is only serverless when it is managed by a cloud provider”). On the other hand, others try to stretch the definition of “serverless” to ensure that the buzzword also applies to a certain service, product, or application.
Personally, I don’t think an exact definition is either desirable or achievable. The emergence and subsequent controversy behind serverless feel oddly reminiscent of the early days of the previous buzzword: “cloud computing”. Back when AWS and other key players popularised that buzzword, there was a lot of controversy around its use. What exactly is cloud computing? When is a service a cloud service, and when is it not? If you look back, various strict definitions were proposed, but we never settled on a clearly delineated definition. Take, for instance, the current cloud computing definition by Merriam-Webster: “The practice of storing regularly used computer data on multiple servers that can be accessed through the Internet” – if that is not a broad, vague definition, I don’t know what is.
This is not to say that we don’t need more structure in defining what constitutes serverless. To introduce a more structured definition, I, together with an international team of researchers and industry professionals, proposed a definition of serverless computing based on three key characteristics:
- Granular billing: The service only bills the user for actual resources used to execute business logic. For example, a traditional VM does not have this characteristic because users are billed hourly for resources regardless of whether they are used.
- Minimal operational logic: Operational logic, such as resource management, provisioning, and autoscaling, should be delegated as much as possible to the cloud provider.
- Event-driven: User applications (whether they are functions, queries, or containers) should be active or deployed only when they are needed – that is, when an event triggers them.
With this definition, FaaS, BaaS, and even some SaaS services are part of the serverless computing domain. If an application, product or service adheres to these three characteristics, call it serverless by all means.
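To make these characteristics concrete, a FaaS function is typically nothing more than a handler the platform invokes per event. The sketch below is illustrative, not tied to any specific platform’s API, although the `main` entry point mirrors the convention used by Python FaaS environments such as Fission’s:

```python
# A minimal FaaS-style handler. It contains no server loop, no
# provisioning, and no scaling logic -- the platform instantiates it
# only when an event (e.g., an HTTP request) arrives, bills only for
# that execution (granular billing), and tears it down afterwards.

def main():
    # Business logic only; all operational logic is delegated
    # to the platform (minimal operational logic).
    return "Hello, serverless!"
```

Everything outside this function – routing the triggering event, scaling instances up and down, billing per invocation – is exactly the operational logic the definition says the provider should absorb.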
Q. From a developer’s point of view, serverless has many advantages; one of them is that you practically don’t have to worry about infrastructure anymore. In your opinion, how does serverless change the everyday life of developers?
There seems to be a lot of talk about how serverless will radically change the lives of developers. However, serverless computing is simply the next step in the perpetual trend in cloud computing. We are continuously trying to move to higher levels of abstractions with our applications, bringing programming closer to the business domain and away from the operational details.
With serverless, we simply continue this trend of higher levels of abstraction. Cloud computing provided us with an abstraction layer over bare-metal machines. Serverless computing takes the next step, abstracting away virtual machine logic for developers.
I believe that the concepts behind serverless computing, despite all the buzzwords, are here to stay, and will continue to trend towards higher-level abstractions for cloud computing.
Q. Developers aren’t the only ones affected by the new model, especially when you think of DevOps: what are the consequences of the serverless approach for operators and system administrators?
While serverless computing is the fastest-growing part of the cloud ecosystem and its promise is clear, it is too extreme to state that it will make all other cloud models obsolete. There will remain use cases that require bare-metal machines rather than serverless services. However, existing cloud models and technologies are moving, and will move further, towards serverless. For example, many projects, such as service meshes (e.g., Istio, Linkerd) and FaaS platforms (e.g., Knative and the open source Fission), have emerged to further abstract away the operational details of containers and Kubernetes. Instead of becoming obsolete, Kubernetes holds the promise of a unified ecosystem in which serverless/FaaS services coexist with more traditionally deployed services.
Q. Serverless is all about scalability and the associated costs: is it still worth using your own servers today, or is the price/performance ratio of serverless unbeatable?
The price/performance ratio of serverless platforms is certainly not unbeatable. The internet is full of one-to-one comparisons between serverless and non-serverless alternatives. For example, if you compare the pricing of AWS Lambda to deploying the same service on an EC2 instance, you will find that, as you increase the number of function executions, the cost of the FaaS function quickly exceeds that of renting a VM.
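The crossover described here can be sketched with back-of-the-envelope arithmetic. All prices below are assumed, illustrative figures (roughly in line with published per-request, per-GB-second, and per-hour rates at the time), not current AWS pricing:

```python
# Illustrative-only cost comparison: pay-per-execution FaaS vs. a
# flat-rate VM. The constants are assumed figures for the sketch;
# check current cloud pricing before drawing real conclusions.

FAAS_PRICE_PER_MILLION_REQUESTS = 0.20  # USD, assumed
FAAS_PRICE_PER_GB_SECOND = 0.0000167    # USD, assumed
VM_PRICE_PER_HOUR = 0.023               # USD, assumed (small instance)

def faas_monthly_cost(requests_per_month, mem_gb=0.128, duration_s=0.1):
    """Granular billing: cost scales with actual executions."""
    request_cost = (requests_per_month / 1_000_000
                    * FAAS_PRICE_PER_MILLION_REQUESTS)
    compute_cost = (requests_per_month * mem_gb * duration_s
                    * FAAS_PRICE_PER_GB_SECOND)
    return request_cost + compute_cost

def vm_monthly_cost(hours=730):
    """Flat billing: the VM costs the same whether idle or busy."""
    return hours * VM_PRICE_PER_HOUR
```

Under these assumptions, a workload of 100,000 executions a month costs a few cents on FaaS versus roughly $17 for an always-on VM, while at 100 million executions the FaaS bill is several times the VM rental – which is exactly the crossover the one-to-one comparisons point out.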
Yet, even when the raw compute is cheaper, one of the key costs remains: operating costs. Deploying a couple of VMs to run your application is only the start. After the deployment, you have to monitor your application; keep the machine, middleware, and application up to date; and scale the number of VMs up and down according to demand (or monitor the autoscaling policies that do so for you). This maintenance and support consumes the costly time of already scarce DevOps engineers.
Still, there are cases in which it is more cost-effective to deploy these applications in traditional close-to-metal cloud models. When the workload is large, relatively stable, or performance is critical, it can be cheaper and safer to have on-prem dedicated resources.
But, in most cases, the microservices that we write are not that performance-critical, nor do they have a consistent, large workload. In such cases, we can save costs by deferring the operations to a cloud provider. The cloud provider (which can simply be an internal operations team running a self-managed FaaS platform, such as the open source Fission) can benefit from economies of scale, multiplexing the serverless applications onto fewer machines while providing the applications with extensive autoscaling.
Q. Finally, a brief look into the crystal ball: what role will serverless play in 2020?
Serverless will play an increasingly important role in how we use and think about distributed cloud applications. Although there are too many challenges and opportunities in serverless computing to describe here, I believe that, on the user level, we will see the following trends develop further:
- The tooling around (open-source) serverless platforms will improve to allow more complex serverless applications. Systems that allow for the composition of serverless functions and services, such as workflow systems, will mature. This will encourage the reuse of existing functions, and simplify the implementation of cross-cloud operations.
- The emergence of edge computing will further amplify the popularity of serverless computing. The on-demand, lightweight, and ephemeral nature of FaaS functions, for example, makes them ideal candidates to serve workloads at the edge.
- New programming models will emerge that leverage serverless computing to help developers define their distributed applications.
Overall, I believe that the concepts behind serverless computing, despite all the buzzwords, are here to stay, and will continue to trend towards higher-level abstractions for cloud computing.
This interview originally appeared on JAXenter.