Kubernetes: To PaaS or Not to PaaS

Kubernetes isn’t impossible to learn; not for operations teams and not for the software developers who create the applications that we all use on a daily basis.

Then why do organizations continue trying to create Platform-as-a-Service (PaaS) layers to obfuscate Kubernetes?

Maybe it’s the fear of letting developers access clusters? Maybe it’s a belief that infrastructure can solve application problems? Maybe it’s what we hear from the Kubernetes veterans with five years of experience who went through hell?

The answer is yes – it’s all of this and more. But the truth is, Kubernetes can be learned, and engaging your development teams and letting them push Kubernetes to its limits is the secret to success.

What is a Kubernetes PaaS?

In relation to Kubernetes, a PaaS would provide a simple way for developers to deploy containerized applications; point, click, done.

In reality, a PaaS for Kubernetes would need to expose the secret sauce of Kubernetes in a simple way with a seamless deployment mechanism. Before jumping into the PaaS debate, it’s worth asking, is Kubernetes already a PaaS?

If we agree that it is not, then the challenge, or maybe the expectation, is presented well by Janakiram MSV in his blog on Kubernetes and PaaS.

Why do organizations look to build PaaS Layers?

The two most common challenges organizations face are setting resource limits (CPU and RAM) and managing services (how applications are exposed to the world).

Exposing these settings to a developer means they need to learn how they work.

Leave it to your platform teams and they might just guess.

Do nothing and you’ll have outages.

Build a PaaS, then let the PaaS handle it. Developers have nothing to learn, and platform teams can rest easy.

However, as Janakiram points out, there are additional Kubernetes-specific challenges – such as storage, Secrets and ConfigMaps – that need to be handled. This further incentivizes the pursuit of a PaaS.
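To make those settings concrete, here is a minimal sketch of what a developer would otherwise never see. The application name, image, ports, and ConfigMap are hypothetical; the point is that resource limits, Service exposure, and configuration wiring all live in manifests that someone has to own.

```yaml
# Hypothetical Deployment and Service showing the settings a PaaS would hide:
# resource requests/limits, a ConfigMap reference, and how the app is exposed.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web-api
  template:
    metadata:
      labels:
        app: web-api
    spec:
      containers:
        - name: web-api
          image: registry.example.com/web-api:1.4.2   # placeholder image
          ports:
            - containerPort: 8080
          envFrom:
            - configMapRef:
                name: web-api-config                  # assumed ConfigMap
          resources:
            requests:
              cpu: 250m        # the scheduler places pods based on requests
              memory: 256Mi
            limits:
              cpu: 500m        # CPU above this is throttled
              memory: 512Mi    # exceeding this gets the container OOMKilled
---
apiVersion: v1
kind: Service
metadata:
  name: web-api
spec:
  type: ClusterIP              # how the application is exposed
  selector:
    app: web-api
  ports:
    - port: 80
      targetPort: 8080
```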

What’s the Biggest Limitation Any Kubernetes PaaS Will Face?

Change and velocity.

A great real-world example is the rise, and then subsequent plateau, consolidation, and pivot of multi-cloud management tools. Solutions like RightScale, Enstratius, Morpheus and Cloudify rose up with much fanfare and then all hit the inevitable: every change made by AWS, Azure and Google forced a related change in their platform. Working through an unending backlog of technical debt was required just to stay relevant, let alone useful, to the end user.

The same holds true for Kubernetes. Any layer that obfuscates, simplifies, or sits on top of Kubernetes will need to change with every quarterly release. Want the latest Kubernetes feature? You’ll either need to add it to your home-grown PaaS or wait for the vendor to add support.

Who has tried the PaaS approach?

Adobe and Intuit, to name two.

And then there are also Netflix, Shopify and Ambassador, the last of them born in the cloud for Kubernetes.

Adobe, Intuit, Shopify and Netflix are large companies that have had almost unlimited budgets. Ambassador is highly specialized. And each has a far-reaching talent pool that started working with microservices and containers over six years ago, six years that have seen a lot of trial and error.

Over time, some of them have moved away from their initial approach. Adobe shared their experience with their Kubernetes PaaS, and Intuit presented at KubeCon 2022 (slides). In 2023, both Adobe and Intuit are leaning into a more Kubernetes-native approach and less of a PaaS approach.

A PaaS Problem Example: Application Resource Limits

What exactly are users trying to solve with a PaaS?

A good example is the issue of setting application resource limits. Setting resource limits sounds simple, but in practice it is a multifaceted problem, and in solving it you’ll need to answer some very real and relevant questions:

  • Who owns creating the limits?
  • Who sets them for production?
  • When are they changed?
  • Why can’t we take the approach we did with VMs?

Ultimately, getting limit management wrong impacts your customers and, if you’re in the public cloud, your bottom line.

From the early ’90s all the way through to 2020, most software was packaged and deployed on a VM, and that VM had set resources. Ideally, users would leverage performance testing results to set the limits, but in all scenarios, they were set and then often “increased” to solve performance issues. I call this “throwing infrastructure at a software problem.”

This was also the domain of the VMware admin. I know this because I spent much of my career working to solve organizations’ application performance monitoring problems.

What’s changed and why is it relevant? Kubernetes blurs the line between who owns the infrastructure and who owns setting the resource limits. (Remember, not setting these will ultimately end in failure and outages.) This is important as it’s one of the many critical divergent patterns that Kubernetes forces on organizations.

And no, you can’t take the approach used for Java running on a VM, and no amount of infrastructure, or configuration of it, will fix this. Plus, don’t forget, in public clouds more infrastructure results in hefty monthly bills.
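As one illustration of how that ownership line can be drawn, a platform team can set namespace-wide defaults with a LimitRange, so a workload that omits its own requests and limits still gets sane values. The namespace and numbers below are hypothetical, and this is a partial mitigation rather than a substitute for developers understanding their own workloads.

```yaml
# Hypothetical LimitRange: namespace defaults applied when a container
# does not declare its own requests or limits.
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits
  namespace: team-payments      # assumed namespace
spec:
  limits:
    - type: Container
      defaultRequest:           # applied when requests are omitted
        cpu: 100m
        memory: 128Mi
      default:                  # applied when limits are omitted
        cpu: 500m
        memory: 512Mi
```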

A PaaS Solution: Application Resource Limits

How will a PaaS help?

A true PaaS layer will remove the need to set any limits and remove the need to even ask – think Heroku. To achieve this in Kubernetes you would need to enable HPA and VPA, set quotas, enable cluster autoscaling, implement a GitOps tool, solve for dependencies, automate service provisioning, and probably end up implementing something like Shopify’s Shipit engine.
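To give a flavor of just two of those pieces, here is a sketch of an HPA and a namespace ResourceQuota. The names and numbers are placeholders, and note the catch: the HPA scales on utilization relative to the CPU requests that someone still has to set, and the quota only works if workloads declare requests and limits at all.

```yaml
# Hypothetical HorizontalPodAutoscaler: scales web-api on CPU utilization,
# which is measured against the pod's *requested* CPU.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-api
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-api
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
---
# Hypothetical ResourceQuota: caps the total requests/limits a team's
# namespace can consume.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota
  namespace: team-payments
spec:
  hard:
    requests.cpu: "8"
    requests.memory: 16Gi
    limits.cpu: "16"
    limits.memory: 32Gi
```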

But, at some stage, just as Intuit found, your developer is going to need to specify the required resources anyway.

It’s safe to say that introducing new concepts to developers is a common event; otherwise, the world would still run on COBOL. The paradox that organizations face is simple: should they introduce developers to the elements that make their software run, and should developers be asked to share in the responsibility?

Yes, and yes. Kubernetes has changed the game, and attempting to play it with yesterday’s tactics will fail.

How does Platform9 Help?

Platform9 helps in three specific ways.

  • First, we remove the need for platform teams to spend all their time running Kubernetes. This allows them to focus on more business-critical functions, like managing cloud spend, helping deliver new projects, and even collaborating with developers on resource limits!
  • Second, we greatly simplify working with Kubernetes by providing all users, both operations and developers, a unified UI for working with running resources: pull logs, edit YAML, view events, and more.
  • Lastly, we provide a built-in GitOps engine that is purpose-built to help developers work with Kubernetes – ArgoCD. Intuit and Adobe both leverage ArgoCD to improve velocity, and to automate and scale their development and production operations. ArgoCD can even be the interface that your development teams use to interact with their clusters, as sketched below.
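As a rough illustration of what that developer interface looks like, here is a sketch of an Argo CD Application. The repository URL, path, and namespaces are placeholders; the idea is that a developer’s deployment becomes a Git commit that Argo CD continuously syncs to the cluster.

```yaml
# Hypothetical Argo CD Application: keeps the cluster in sync with the
# manifests stored in a Git repository.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: web-api
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example-org/web-api-manifests.git  # placeholder repo
    targetRevision: main
    path: overlays/production
  destination:
    server: https://kubernetes.default.svc
    namespace: team-payments
  syncPolicy:
    automated:
      prune: true        # remove resources deleted from Git
      selfHeal: true     # revert manual drift back to the Git state
```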

These three elements and our open-source approach mean that when you run with Platform9 you start immediately with a competitive advantage and run with 24/7 Always-on Assurance.

Keen to learn how this can help your organization specifically? Set up a call with one of our experts. They’ll look at the challenges you’re trying to address and help you understand how to solve those problems – and how Platform9 can help you do so faster, easier, and with better peace of mind.
