Serverless is HOT!
Serverless applications increase developer productivity and reduce time to market by allowing engineers to focus solely on their code, freeing them from spending time on infrastructure provisioning, configuration, and management.
Serverless also simplifies operations and reduces infrastructure cost: the Kubernetes container infrastructure required to run these applications is spun up automatically and scaled precisely with the workload, so all runtime requests are handled optimally.
Recent advances in open source technology now allow organizations to run Serverless and Kubernetes reliably, at scale, on on-premises and private cloud infrastructure as well. The ability to achieve the benefits of Serverless on existing infrastructure, without having to rely solely on public clouds, has greatly increased the adoption of Serverless across industries, both for greenfield applications and for established organizations with legacy applications and technical debt.
When we talk about Serverless, we often focus on the speed of development that it offers, enabling developers to create useful applications quickly from scratch without learning a lot about Kubernetes.
Go Faster, Safer
We need to keep in mind, though, that to realize the benefits of software-driven innovation, these faster-to-develop Serverless applications still have to be hardened, operationalized, and reliably managed at scale.
Operations is largely about reliably managing changes to applications and making sure those applications run optimally. This is the responsibility of both Dev and Ops, and it spans the entire pipeline, from code check-in through testing, deployment, scaling, monitoring, and ongoing optimization.
- How can we enable Dev to produce better-quality Serverless code?
- How can we ensure Ops has visibility into these apps, and confidence in their ability to manage them effectively?
- What are some best practices for optimizing Serverless apps at scale, both in your own datacenter and in the cloud?
Go Faster, Safer, at Scale: Serverless Operations
Our own Soam Vasani will be answering these questions in his upcoming webinar with CNCF on September 25.
Soam is the creator of Fission.io, the open source, Kubernetes-native Serverless framework that lets you run your own Lambda-like apps in any environment, on premises or in the public cloud.
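To give a sense of how lightweight a Fission-style function is, here is a minimal sketch in Python. The module-level `main()` entry point follows the convention of Fission's Python environment, where the return value becomes the HTTP response body; the exact details depend on the environment image you deploy, so treat this as illustrative.

```python
# hello.py -- a minimal function body in the style Fission's Python
# environment expects: a module-level main() whose return value
# becomes the HTTP response body. No server, container, or
# Kubernetes manifest is written by the developer; the framework
# supplies all of that around this function.
def main():
    return "Hello, world!\n"
```

The point of the example is what is absent: there is no HTTP server, Dockerfile, or deployment YAML for the developer to maintain, which is exactly the productivity gain described above.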
In his webinar, Soam will focus on the operational aspects of Serverless applications on Kubernetes. He will share best practices for Serverless development methodologies, along with key functionality provided by production-grade Serverless frameworks that helps ensure the reliability and stability of Serverless apps:
- Managing deployments using declarative specifications.
- Tracking changes to deployment configuration.
- Using Canary deployments to reduce the risk and impact of application changes.
- Managing operational expense: We’ll show how you can make different cost-performance tradeoffs; we’ll discuss what the default choices imply, and how to tune them.
- Finally, we’ll demonstrate how to track serverless application performance and issue alerts using Prometheus.
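To make the canary idea concrete, here is an illustrative sketch of what a declarative canary configuration could look like. The resource kind and field names below are hypothetical assumptions for illustration, not the exact schema of Fission or any other framework (Fission does provide a similar CanaryConfig resource); the idea is that traffic shifts gradually from the old version to the new one and rolls back automatically if errors exceed a threshold.

```yaml
# Illustrative only: a hypothetical canary resource in the spirit of
# Fission's CanaryConfig. All field names here are assumptions.
apiVersion: example.io/v1
kind: CanaryConfig
metadata:
  name: hello-canary
spec:
  trigger: hello-http-trigger   # HTTP trigger whose traffic is split
  oldFunction: hello-v1         # current stable version
  newFunction: hello-v2         # candidate version
  weightIncrement: 20           # shift 20% more traffic per step
  incrementInterval: 1m         # wait time between traffic shifts
  failureThreshold: 10          # % failed requests that triggers rollback
```

Because the rollout is declarative, the same spec can be checked into version control alongside the deployment specifications discussed above.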
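As a taste of the Prometheus piece, an alerting rule for serverless functions might look like the following standard Prometheus rule file. The metric name `http_requests_total` and its labels are assumptions for illustration; substitute whatever metrics your framework actually exports.

```yaml
# prometheus-rules.yaml -- a standard Prometheus alerting rule group.
# The metric and label names are illustrative assumptions.
groups:
  - name: serverless-alerts
    rules:
      - alert: HighFunctionErrorRate
        # Fraction of requests returning 5xx over the last 5 minutes
        expr: >
          rate(http_requests_total{code=~"5.."}[5m])
          / rate(http_requests_total[5m]) > 0.05
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "Serverless function error rate is above 5%"
```

Rules like this let Ops get paged on function failures the same way they would for any other Kubernetes workload.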
Don’t miss this chance to get on the leading edge of Serverless apps and learn how you can improve the quality of your serverless code and operationalize it from the get-go.