
Fission Workflows: Using Serverless For Processing Kubernetes Metrics

For some tasks, running a persistent service isn’t really practical. FaaS (Functions-as-a-Service) technology has largely addressed this need, with particular emphasis on integration with Kubernetes. Fission.io is one such FaaS framework built on Kubernetes, and with Fission Workflows it can be extended to dynamically manage interrelated jobs.

In this post, we will cover integrating Fission Workflows into your application. We’re going to create a Workflow that performs two tasks:

  1. The first Fission function will grab data from the Heapster API (in this case, cluster metrics from the kube-system namespace) and return a JSON object that is inserted into MongoDB.
  2. The second will format the values stored in MongoDB into a consumable body (in this case, data to populate a graph rendered by CanvasJS).

Installing Fission Workflows

To follow along, you will need:

  1. A running Kubernetes cluster or Minikube
  2. Fission and Fission Workflows installed on the cluster
  3. A MongoDB service running in the cluster (we will set this up in a moment)

The Data

Let us first set up MongoDB. For the purposes of this example, we will run a single pod managed by a ReplicationController resource:
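
The manifest below is a minimal sketch of what mongo-k8s.yaml might contain; the image tag and labels are assumptions, and the Service name matches the DNS name used in the next step:

# mongo-k8s.yaml -- a sketch; image tag and labels are assumptions
apiVersion: v1
kind: ReplicationController
metadata:
  name: faas-demo-mongo
spec:
  replicas: 1
  selector:
    app: faas-demo-mongo
  template:
    metadata:
      labels:
        app: faas-demo-mongo
    spec:
      containers:
      - name: mongo
        image: mongo:3.6
        ports:
        - containerPort: 27017
---
apiVersion: v1
kind: Service
metadata:
  name: faas-demo-mongo
spec:
  selector:
    app: faas-demo-mongo
  ports:
  - port: 27017
    targetPort: 27017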

Use the following command to create the Mongo controller and service on faas-demo-mongo.faas-demo.svc.cluster.local:

kubectl create -f mongo-k8s.yaml --namespace=faas-demo

With the database up and running, we can move on to seeding it.

Writing Metrics to MongoDB

We can set up the first task to read metrics from Heapster, return a JSON object for the latest metric, and insert it into MongoDB, first checking that it does not duplicate an existing point in the time series:
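
A sketch of what writeMongo.py might look like follows; the Heapster model-API path and the database and collection names are assumptions, and main() is the entry point the Fission Python environment invokes:

# writeMongo.py -- a sketch; the Heapster URL and collection names are assumptions
import json
import requests
from pymongo import MongoClient

HEAPSTER = "http://heapster.kube-system.svc.cluster.local/api/v1/model/metrics/memory/usage"
MONGO = "mongodb://faas-demo-mongo.faas-demo.svc.cluster.local:27017/"

def main():
    # Grab the cluster memory-usage series from the Heapster model API
    metrics = requests.get(HEAPSTER).json()["metrics"]
    latest = metrics[-1]  # the newest {"timestamp": ..., "value": ...} pair

    collection = MongoClient(MONGO)["faas-demo"]["metrics"]
    # Upsert keyed on timestamp so re-running the task never duplicates a point
    collection.update_one(
        {"timestamp": latest["timestamp"]},
        {"$set": {"timestamp": latest["timestamp"], "value": latest["value"]}},
        upsert=True,
    )
    return json.dumps(latest)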

Our next task is to make the data usable by our application; as we will see later on, the plot requires an x and a y value for each point.

Reformatting the Data

Let us now create a task script to read the data from MongoDB and format it to meet the needs of this application:
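
A sketch of dataRead.py, assuming the same database and collection names as above (CanvasJS consumes an array of {x, y} points):

# dataRead.py -- a sketch; database and collection names are assumptions
import json
from pymongo import MongoClient

MONGO = "mongodb://faas-demo-mongo.faas-demo.svc.cluster.local:27017/"

def main():
    collection = MongoClient(MONGO)["faas-demo"]["metrics"]
    # Pull the 60 newest points, then re-sort oldest-first for plotting
    newest = list(collection.find().sort("timestamp", -1).limit(60))
    # x is the metric timestamp, y is the metric value
    points = [{"x": doc["timestamp"], "y": doc["value"]} for doc in reversed(newest)]
    return json.dumps(points)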

This script will return the last 60 data points as x and y values.

Creating the Fission Workflow

With these scripts in hand, we now need to do two things: create the Fission functions for each task, and set up the Workflow. Start by creating the Python environment:

fission env create --name python --image quay.io/jmarhee/python-env

Next, create the two tasks:

fission function create --name faas-demo-write --env python --code writeMongo.py
fission function create --name faas-demo-return --env python --code dataRead.py --path /memstates --method GET

Deploying these as related services, one with an obvious dependency on the other, is straightforward with a Workflow:
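
A minimal metricsmgr.wf.yaml might look like the following sketch; the spec fields mirror the examples in the Fission Workflows repository, and the task names reference the two functions created above:

# metricsmgr.wf.yaml -- a sketch of the Workflow spec
apiVersion: v1
output: FormatData
tasks:
  WriteMetrics:
    # Run the write task first to land the latest metric in MongoDB
    run: faas-demo-write
  FormatData:
    # Then format the stored points; requires enforces the ordering
    run: faas-demo-return
    requires:
    - WriteMetrics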

This Workflow can be launched with the following Fission command:

fission function create --name metricsmgr --env workflow --code ./metricsmgr.wf.yaml

Using Fission Workflows to Consume Task Data

Once the tasks and Fission Workflow are launched, your graph data will be accessible from the Fission router in the Fission namespace:

router.fission.svc.cluster.local/memstates

A GET request to this endpoint returns a JSON body of x and y values.

We can now set up a small Ruby app to consume the Fission endpoint statelessly:
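
A sketch of index.rb using Sinatra; the router URL matches the endpoint above, and the erb template that renders the CanvasJS chart is an assumption:

# index.rb -- a sketch; the :index template is an assumed CanvasJS page
require 'sinatra'
require 'net/http'
require 'json'

ENDPOINT = URI('http://router.fission.svc.cluster.local/memstates')

get '/' do
  # Fetch the formatted points from the Fission router on every request
  @points = JSON.parse(Net::HTTP.get(ENDPOINT))
  erb :index  # feed @points into the CanvasJS chart in the template
end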

In index.rb, shown above, a GET request to the memstates endpoint retrieves the formatted graph data and populates the graph.

We are now ready to run through a normal container deployment for the application. A container can be created using the following Dockerfile:
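
A minimal Dockerfile for the Sinatra app might look like this; the base image and port are assumptions:

# A sketch; assumes index.rb listens on Sinatra's default port 4567
FROM ruby:2.5
WORKDIR /app
COPY . .
RUN gem install sinatra json
EXPOSE 4567
CMD ["ruby", "index.rb", "-o", "0.0.0.0"]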

Build and push the container using the following command:

docker build -t yourregistry/sample-ui . && docker push yourregistry/sample-ui

Deploy the container using the following command:

kubectl create -f sample-app.yaml
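
A sketch of sample-app.yaml, assuming a single-replica Deployment and a Service exposing the Sinatra port:

# sample-app.yaml -- a sketch; names, labels, and ports are assumptions
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-ui
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sample-ui
  template:
    metadata:
      labels:
        app: sample-ui
    spec:
      containers:
      - name: sample-ui
        image: yourregistry/sample-ui
        ports:
        - containerPort: 4567
---
apiVersion: v1
kind: Service
metadata:
  name: sample-ui
spec:
  selector:
    app: sample-ui
  ports:
  - port: 80
    targetPort: 4567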

Conclusions & Additional Thoughts

Though this example uses trivial data, Fission and Fission Workflows can be leveraged in a variety of real-world scenarios to push and pull data in a task-oriented, functional way, with little operational overhead or developer time investment.

In addition to the example demonstrated above, Fission Workflows can also be used effectively for sequential tasks. An example of sequential behavior comes from the Fission Workflows GitHub repository:
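
The following sketch approximates that repository's sleep-chain example; the exact task names and durations are assumptions, and the spec format follows metricsmgr.wf.yaml above:

# A sketch of a sequential sleep chain; each task requires the previous one
apiVersion: v1
output: SleepC
tasks:
  SleepA:
    run: sleep
    inputs: "1s"
  SleepB:
    run: sleep
    inputs: "1s"
    requires:
    - SleepA
  SleepC:
    run: sleep
    inputs: "1s"
    requires:
    - SleepB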

In the above example, each task in the chain depends on the previous one completing before it starts (a series of sleeps). Additionally, Fission Workflows can be used for multi-stage processes such as image processing, complex data transformations, and cleaning tasks that might be relatively expensive to complete within a single service call. Workflows also address parallelization, which benefits use cases such as data processing, where the same data set may need to produce multiple outputs for various pipelines in your application.

Chris Wright
