An introduction to Flux – Part 2: Capabilities and ecosystem

This blog is part 2 of our introduction to Flux. In part 1, we walked through the history of Flux, its graduation as a CNCF project, and dove into the core features of Flux as they fit into the evaluation criteria. In this blog, we’ll take a deeper look at some of Flux’s unique capabilities, ecosystem plug-ins, and more.  

Flux Unique Capabilities 

Write to Git 

Compared to ArgoCD and Terraform, Flux deviates in one very significant way: it's able to write to Git. Flux can update a managed resource in Git and then apply it to the cluster it's running within! This is unique – neither Terraform nor ArgoCD currently contains features that can help automate a closed-loop GitOps process. Flux achieves this through two services, the Image Automation Controller and the Image Reflector Controller. Together, these two services can detect a change in a container image and update a running resource by changing the object in Git.

Why would you want to automate container image versions?  

The final step in deploying a new image using GitOps is to update the manifest in Git. This could be achieved using a multi-step workflow tool, something like Argo Workflows. Flux removes the final step by maintaining the image tag automatically based on changes to the container image that is running! Rest assured, this feature can quickly be disabled during incidents so that you don't need to fight with Flux replacing your running container image when you don't want it to. You can read more on the image update and suspension features in the Flux documentation.
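As a minimal sketch of what this automation looks like, the three image-automation objects below scan a registry, pick the newest tag matching a semver range, and commit the update back to Git. The repository, image name, and paths are illustrative assumptions, not taken from the article:

```yaml
# Scan the registry for new tags of the image (names/paths are hypothetical)
apiVersion: image.toolkit.fluxcd.io/v1beta2
kind: ImageRepository
metadata:
  name: podinfo
  namespace: flux-system
spec:
  image: ghcr.io/stefanprodan/podinfo
  interval: 5m
---
# Select which tags are eligible (here: any 6.x semver release)
apiVersion: image.toolkit.fluxcd.io/v1beta2
kind: ImagePolicy
metadata:
  name: podinfo
  namespace: flux-system
spec:
  imageRepositoryRef:
    name: podinfo
  policy:
    semver:
      range: 6.x
---
# Commit the selected tag back to the Git repository
apiVersion: image.toolkit.fluxcd.io/v1beta2
kind: ImageUpdateAutomation
metadata:
  name: flux-system
  namespace: flux-system
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: flux-system
  git:
    checkout:
      ref:
        branch: main
    commit:
      author:
        name: fluxcdbot
        email: fluxcdbot@users.noreply.github.com
      messageTemplate: "Automated image update"
    push:
      branch: main
  update:
    path: ./clusters/production
    strategy: Setters
```

During an incident, the loop can be paused with `flux suspend image update flux-system` and resumed later with `flux resume image update flux-system`.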

In-Line Patch 

The journey to cloud-native is long and requires significant transformation and the adoption of new tooling, significant portions of which may reside in existing 3rd party Git repositories, including deployment manifests. This scenario presents a decision point: clone the repository and continuously maintain that clone forever after, or deploy using the vendor/3rd party provided manifests unchanged and manage the potential performance and availability implications. Fortunately, Flux provides a third alternative: 'in-line patching'.

An existing 3rd party manifest can be utilized directly from the provider’s repository by creating a Flux Kustomization that references the requisite repository and Git branch. The resulting Kustomization can then be updated with your required changes by modifying the YAML and adding an ‘in-line patch’ reference. 

In-line patching allows you to maintain your unique customizations of a deployment, such as limits, replicas, and ports, without the need to continuously manage a cloned or forked repository. However, it does not remove the need to continuously test upstream changes to the 3rd party's branch or requisite container image. When you're using in-line patching to implement changes to a 3rd party hosted manifest, remain vigilant, and perhaps disable automatic synchronization and implement an approval process that makes use of Flux Notifications.
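A minimal sketch of the pattern might look like the following: a GitRepository pointing at the vendor's repository, and a Flux Kustomization that patches the replica count and resource limits in place. The upstream URL, paths, and resource names are hypothetical placeholders:

```yaml
# Point Flux at the 3rd party repository (URL and path are hypothetical)
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: vendor-app
  namespace: flux-system
spec:
  interval: 10m
  url: https://github.com/example-vendor/app-manifests
  ref:
    branch: main
---
# Deploy the vendor manifests with in-line patches applied on top
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: vendor-app
  namespace: flux-system
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: vendor-app
  path: ./deploy
  prune: true
  patches:
    - target:
        kind: Deployment
        name: app
      patch: |
        - op: replace
          path: /spec/replicas
          value: 3
        - op: add
          path: /spec/template/spec/containers/0/resources
          value:
            limits:
              cpu: 500m
              memory: 256Mi
```

The patches live in your own repository, so the vendor's branch can keep moving while your customizations stay put.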

Learn more about in-line patching in the Flux Kustomization documentation.

Core Features 

Diff and Previewing Changes  

Similar to `terraform plan`, Flux has a built-in pre-flight command that allows you to assess a change by viewing the 'diff' between an object in Git and the version running on the cluster.

The `flux diff kustomization` command takes the name of a Flux Kustomization and a local path (via the `--path` flag) and prints the differences between the manifests on disk and the objects in the cluster to standard output. This makes it easy to preview exactly what a change will do before it is pushed to Git.


Notifications

Inbound and outbound notifications are handled by a single Flux service known as the Notification Controller. The controller is involved in critical workflows, such as reacting to a change in a Git repository and the subsequent update of the resource running inside the cluster. The service (Receiver API) has built-in support for:

  • GitHub 
  • Bitbucket 
  • GitLab 
  • Harbor 
  • Docker Hub 
  • Quay 
  • Google Container Registry 
  • Nexus 
  • Azure Container Registry 

Notifications are not limited to automating synchronization; they can also be used for outbound action. The outbound capabilities can be implemented to send notifications on any Flux event, including when a resource is found to be out of sync with its source. A key use case for outbound drift notifications is development clusters: you may want to deploy a baseline but not limit what users can change, instead opting for a review process based on notification of drift. The notification captures the changes, and subsequent reviews can update the baseline where needed. Built-in support (Provider API) includes:

  • Webhook 
  • Slack 
  • Rocket 
  • Discord 
  • Lark 
  • Grafana 
  • Alertmanager 
  • Webex 
  • Telegram 
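As a sketch of the outbound case described above, the pair of objects below wires Flux events to a Slack channel: a Provider holding the destination, and an Alert selecting which events to forward. The channel and secret names are hypothetical:

```yaml
# Where to send notifications (channel and secret name are hypothetical)
apiVersion: notification.toolkit.fluxcd.io/v1beta3
kind: Provider
metadata:
  name: slack
  namespace: flux-system
spec:
  type: slack
  channel: flux-alerts
  secretRef:
    name: slack-webhook-url   # Secret containing the webhook address
---
# Which events to forward: all Kustomization and HelmRelease events
apiVersion: notification.toolkit.fluxcd.io/v1beta3
kind: Alert
metadata:
  name: cluster-events
  namespace: flux-system
spec:
  providerRef:
    name: slack
  eventSeverity: info
  eventSources:
    - kind: Kustomization
      name: '*'
    - kind: HelmRelease
      name: '*'
```

Narrowing `eventSources` or raising `eventSeverity` to `error` keeps the channel focused on actionable drift rather than routine reconciliations.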

Helm Support 

Flux has invested heavily to ensure that it is not merely 'inflating' Helm charts, but truly implementing them, such that they're deployed into a cluster as an actual Helm release. The Flux team has a great explanation in their documentation. To quote:

“Helm Controller (and its predecessor, Helm Operator) stand alone in the GitOps world as Go client implementations of the Helm package library.” 

The Helm Controller is a Helm client, capable of acting in a manner that is directly comparable to a human (or custom Jenkins code) using the Helm CLI.

“While there are many projects in the GitOps space that can perform “Helm Chart Inflation” which can also be explained through the activities of merging values from different sources, rendering templates, applying the changes to the cluster, and then waiting to become healthy, other projects usually cannot claim strict compatibility with all Helm features. Helm Controller boasts full compatibility and reliably identical behavior in Flux with all released Helm features.” 

This is important as it allows Flux to leverage Helm charts directly, including Helm-based features, and also helps avoid tool lock-in (lock-in can occur when a CD tool transforms the Helm chart into a secondary object). By implementing Helm directly, you could, for example, back up a Helm-based database deployment as part of a Helm release upgrade or leverage a Helm chart test to simplify liveness checks.
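A minimal HelmRelease sketch illustrates the point: the chart is fetched from a Helm repository and installed as a real Helm release, with chart tests enabled. The chart, repository URL, and values shown are illustrative assumptions:

```yaml
# A Helm repository as a Flux source (URL is a public demo chart repo)
apiVersion: source.toolkit.fluxcd.io/v1
kind: HelmRepository
metadata:
  name: podinfo
  namespace: flux-system
spec:
  interval: 10m
  url: https://stefanprodan.github.io/podinfo
---
# An actual Helm release, managed by the Helm Controller
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: podinfo
  namespace: default
spec:
  interval: 10m
  chart:
    spec:
      chart: podinfo
      version: '6.x'
      sourceRef:
        kind: HelmRepository
        name: podinfo
        namespace: flux-system
  test:
    enable: true   # run the chart's `helm test` hooks after install/upgrade
  values:
    replicaCount: 2
```

Because this is a genuine Helm release, `helm list` and `helm history` on the cluster show it exactly as if it had been installed by hand.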

A great guide for Helm users has been put together by the Flux team, and Pavan Kumar has a short getting-started guide for Flux and Helm worth checking out.

Plug-ins / Extensions  

Flux has adopted the approach of fostering a community that extends Flux, whilst the maintainers focus on the core problems. This is similar to the core Kubernetes project, which intentionally jettisons features that are not core and instead fosters a community of projects through interfaces and Custom Resource Definitions.

Flux Terraform Integration  

Flux has a community-built Terraform integration that can help mitigate two of Terraform's most significant shortcomings: concurrent user access and state file management. The Terraform plug-in achieves this by leveraging Git as the source of truth and removing the need for users to invoke Terraform commands, which, in turn, hides the Terraform state file. This innovative approach greatly expands the versatility of Flux and simultaneously fixes some of the drawbacks of using Terraform to manage Kubernetes clusters. That said, why would you use Flux and Terraform together? There are two main reasons: cluster inception and adjacent cloud resources.

The inception problem is that Flux can only manage apps after a cluster is built; so where is your cluster defined, what is building it, and how is its state being maintained? To be 100% GitOps, the cluster's definition (cloud, region, Kubernetes version, nodes, CNI, CSI) should have its state stored and maintained in Git, which acts as the source of truth. Change it, and the cluster is updated; delete it, and the cluster is deleted from your cloud. Standalone Flux cannot achieve cluster creation; it requires Cluster API, Crossplane, or Terraform.

The Terraform provider can be added to an existing Flux deployment and be used to create Kubernetes clusters in clouds such as AWS, Azure, and Google. In doing this, you will move the cluster’s definition into a Git repository, and Flux + Terraform will become the controller for Cluster Lifecycle. With further investment, a Cluster Lifecycle Pipeline can be developed that can automatically connect the existing Flux + Terraform deployment to the newly created cluster. This connection then allows the Flux + Terraform deployment to act as a central control point for both cluster lifecycle and application CD. But not all applications rely on resources that are solely operated within a Kubernetes cluster. They may rely on adjacent cloud services such as RDS.  

Adjacency is a primary benefit of the public cloud. If your application requires a MySQL database, just spin up RDS. Need an ML service? Turn it on when needed, then shut it down once the inference is complete. Your applications can run in a container and interact with cloud-hosted PaaS or SaaS services. However, if you move to using Flux, then you may be faced with a bifurcated tool set where your clusters and applications are managed by Flux, and your related cloud services are managed by an additional Terraform deployment.

The Terraform plugin for Flux solves this. The plugin allows Flux to act as the controller of Terraform and can therefore leverage any Terraform provider. So, if your application does rely on a MySQL database, then you may use Flux + Terraform as a single operational platform to manage both the app inside Kubernetes and the AWS RDS instance. The combination is powerful and can be leveraged for holistic platform management. The Flux team has written up an overview of the plugin and provided some examples in the plugin's documentation to help get you started.
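As a sketch of the RDS scenario, the Terraform custom resource below (from the community tf-controller project) points Flux at a directory of Terraform code and auto-approves plans; the source name, path, and secret name are hypothetical:

```yaml
# Reconcile a Terraform module from Git (names/paths are hypothetical)
apiVersion: infra.contrib.fluxcd.io/v1alpha2
kind: Terraform
metadata:
  name: rds-mysql
  namespace: flux-system
spec:
  interval: 1h
  approvePlan: auto          # or omit to require manual plan approval
  path: ./terraform/rds
  sourceRef:
    kind: GitRepository
    name: infra-repo         # GitRepository pointing at your Terraform code
  writeOutputsToSecret:
    name: rds-mysql-outputs  # connection details exposed to in-cluster apps
```

Leaving `approvePlan` unset turns the flow into a plan-and-review process, which pairs naturally with the notification features described earlier.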


Flagger

The most prominent Flux extension is Flagger, a progressive delivery tool that enables operations teams to leverage a variety of software release strategies that can help reduce customer-facing outages or test new features. Flagger works with any Kubernetes-native CI/CD tool, not just Flux, and is generally implemented alongside ingress controllers and/or a service mesh for traffic control, and monitoring solutions such as Prometheus and Datadog for understanding the health of the release.

Flagger works by supplementing a Kubernetes Deployment (or DaemonSet) with a Flagger-controlled resource that becomes the "primary" recipient of user traffic. When a change is made to the original Kubernetes Deployment, Flagger automatically creates a canary resource that receives a portion of the traffic. The new canary deployment is monitored for health, latency, and user-extensible metrics. If the stated tests pass, the primary deployment is updated; if the canary fails, the primary is not updated.

Release Strategies: 

  • Canary 
  • A/B Testing  
  • Blue/Green Deployments  
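A minimal Canary resource sketches how the analysis described above is configured; the target names, port, and thresholds are illustrative assumptions:

```yaml
# Progressive delivery for a Deployment (names/values are hypothetical)
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: podinfo
  namespace: test
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo
  service:
    port: 9898
  analysis:
    interval: 1m
    threshold: 5        # failed checks before rollback
    maxWeight: 50       # maximum traffic share sent to the canary
    stepWeight: 10      # traffic increase per interval
    metrics:
      - name: request-success-rate
        thresholdRange:
          min: 99       # roll back if success rate drops below 99%
        interval: 1m
```

Swapping the `analysis` section for an A/B or blue/green configuration changes the strategy without touching the application's Deployment.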

Flux with Flagger can help you realize some of the true benefits of cloud-native by making advanced software release strategies a reality.   

I hope you found our introduction to Flux useful and informative. Next in the blog series is an introduction to ArgoCD following the same structure, followed by an introduction to Terraform. The final blog in the series will consist of the comparative analysis. Check back soon for the next installment. Missed previous installments? Get started with the first blog in the series, How do you balance product operations and developer productivity?
