How To Set Up ALPR (Automatic License Plate Recognition) with Kubernetes to Improve Retail Drive-Thru Customer Experience (Part 2)

In Part 1 we covered the growing importance of drive-thru, curbside pickup, and personalized service experiences to retailers, particularly given the pandemic, and how Automatic License Plate Recognition (ALPR) can be used to differentiate the customer experience. In the tutorial below we focus on applying ALPR in a real-world drive-thru quick-service restaurant (QSR) scenario, generating license plate results in real time. The approach is equally applicable to the curbside and automotive scenarios we covered in Part 1.

Tutorial Overview

  1. Install the pf9 CLI on a BareOS or on-premises virtual machine running Ubuntu 18.04 to connect the node to the Platform9 Management Plane.
  2. Configure the pf9 CLI with your Platform9 credentials and confirm the node appears in the dashboard.
  3. Create a cluster using the dashboard console.
  4. Download the Helm charts for a containerized implementation of the ALPR repo.
  5. Set up dependencies and install Helm.
  6. Deploy the ALPR workload on the node we created.
  7. Test the results with a video stream of a moving vehicle.

Use Case

This tutorial shows how to create a customized experience for restaurant customers using ALPR (Automatic License Plate Recognition), also known as ANPR (Automatic Number Plate Recognition), in a drive-thru scenario where vehicle license plates are recognized in real time.

We will create a Kubernetes cluster on in-house, on-premises virtual machines to host the ALPR solution, with the cluster created and managed using Platform9 managed container deployment.

We chose Plate Recognizer ALPR for this tutorial because it provides a Helm chart for Kubernetes and supports our use case of ALPR from a video stream. Frames can be processed one at a time in real time, which lets the implementation raise an alert when a regular customer arrives so they can be given a personalized experience.

Getting started

If you want to follow along with this tutorial all you need is:

  1. A laptop or virtual machine with internet access
  2. A free Platform9 Managed Kubernetes account

The second item is a free-tier account on Platform9. The Platform9 Managed Kubernetes Free Tier makes it easy to get started: the site guides you through verifying your account, and you should be ready to go in a couple of minutes.

In this tutorial we will be using an on-premises virtual machine running Ubuntu 18.04 (the setup can just as easily be done on a laptop). The machine I am using has the following configuration:

  • 4 VCPUs
  • 16 GB RAM
  • 300 GB HDD

Install the pf9 CLI on this node. If you run into any issues setting up the single-node cluster with the pf9 CLI, follow the Quick Setup Guide; its BareOS instructions walk through connecting the node to the Platform9 dashboard.
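As a rough sketch of that flow (the exact commands and prompts can differ by CLI version, so treat the Quick Setup Guide as the source of truth), installing the CLI and onboarding the node typically looks like this:

bash <(curl -sL https://pf9.io/get_cli)
pf9ctl config set
pf9ctl prep-node

The config step prompts for your Platform9 account URL and credentials; prep-node then prepares the machine and connects it to the management plane.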

The node becomes visible once the pf9 setup completes; after navigating to the dashboard from your profile, you will see something like the following.

anpr

Creating a cluster

After the node is configured and ready, the next step is to create a cluster. We will use this single-node cluster for our deployments.

To create a cluster from the dashboard we just need to click on the Add cluster button as shown in the image below.

creating cluster

On the next screen, we select what kind of host we are going to use for this cluster. In this tutorial we select a physical cluster and then click One-Click Cluster, as shown in the image below.

managing cluster

The monitoring pods come up automatically: the dashboard populates with deployments that monitor the health of the cluster and its pods, and it will look like the following.

health monitoring

Setting up dependencies

Now we will install the required dependencies on the server. We need the following:

  • Helm, to install the ALPR chart on the cluster (a quick install sketch follows this list)
  • kubectl access to the new cluster
  • The deployment token and license key from Plate Recognizer; the procedure for requesting these is outlined in their README
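If Helm is not already present on the node, one common way to install it is the official installer script from the Helm project (shown here as a sketch; any supported install method works):

curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh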

To keep the new workload easy to monitor, we will deploy the ANPR release in a dedicated anpr namespace. A namespace can be created from the UI itself: from the dashboard, navigate to the Workloads tab and select Namespaces, then click the New Namespace button and create a namespace named anpr. (A command-line alternative is shown after the screenshot below.)

creating namespaces
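If you prefer the command line, the same namespace can be created with kubectl instead of the UI, assuming your kubeconfig points at the new cluster:

sudo kubectl create namespace anpr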

With the namespace in place, all that remains is to install the Helm chart with the required credentials, namely TOKEN and LICENSE_KEY. You can get these credentials by signing up for the Plate Recognizer free trial; make sure to request that Kubernetes be enabled for your ALPR account.

The complete command to create the deployment, run from inside the ANPR Helm charts repository, is as follows:

sudo helm install --namespace anpr platerec-sdk platerec-helm/ --set TOKEN=7163441bac21aafe860h30de1ff2815e3c9ad471116 --set LICENSE_KEY=rsUVJKLWIAa

(The key and token in the command above are placeholders that only show what the real values look like.)
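To confirm the release was created, you can list the Helm releases in the namespace:

sudo helm list --namespace anpr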

Testing the results and real-time deployment

Once the Helm installation completes, we need to make sure the pods in the anpr namespace come up correctly. To do so, navigate to Pods and select the anpr namespace from the dropdown to confirm the pods are up and in a healthy Running state, as shown below. The same view can be used to monitor the health of the application and to create alerts when a pod's state changes.

Monitoring_pods_namespaces
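The same check can be done from the command line; the pods should report a Running status:

sudo kubectl -n anpr get pods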

To test the local deployment on the on-premises virtual machine, we first forward the service port to localhost so we can send a curl request to the ALPR service. For our deployment we can do that with:

sudo kubectl -n anpr port-forward service/platerec-sdk-platerec-helm 2016:8080

Now that the port is exposed, we can reach the service with a simple curl request. To test, we will send a frame from our video and read back the response from the deployment.
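As a quick sanity check, a single frame saved from the video (the file name here is just an example) can be posted to the forwarded port; the upload field name and the /alpr path match what the Python client later in this tutorial uses:

curl -F "upload=@frame.png" http://localhost:2016/alpr

A healthy deployment returns a JSON body containing a results list with the detected plate.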

Inference options

We can stream live video from a CCTV camera or any other RTSP source. In this example we will use a file stored on the system ("video.MOV" in our case), read it with OpenCV, and send the frames for processing to the API we created on the on-premises server.

To simulate an RTSP stream from an existing file, we can use the following snippet:

#!/usr/bin/env python

import sys
import gi

gi.require_version('Gst', '1.0')
gi.require_version('GstRtspServer', '1.0')
from gi.repository import Gst, GstRtspServer, GObject, GLib

loop = GLib.MainLoop()
Gst.init(None)

class TestRtspMediaFactory(GstRtspServer.RTSPMediaFactory):
    def __init__(self):
        GstRtspServer.RTSPMediaFactory.__init__(self)

    def do_create_element(self, url):
        #set mp4 file path to filesrc's location property
        src_demux = "filesrc location=video.MOV ! qtdemux name=demux"
        h264_transcode = "demux.video_0"
        #uncomment following line if video transcoding is necessary
        #h264_transcode = "demux.video_0 ! decodebin ! queue ! x264enc"
        pipeline = "{0} {1} ! queue ! rtph264pay name=pay0 config-interval=1 pt=96".format(src_demux, h264_transcode)
        print ("Element created: " + pipeline)
        return Gst.parse_launch(pipeline)

class GstreamerRtspServer():
    def __init__(self):
        self.rtspServer = GstRtspServer.RTSPServer()
        factory = TestRtspMediaFactory()
        factory.set_shared(True)
        mountPoints = self.rtspServer.get_mount_points()
        mountPoints.add_factory("/stream1", factory)
        self.rtspServer.attach(None)

if __name__ == '__main__':
    s = GstreamerRtspServer()
    loop.run()
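Once this script is running, the file is served as an RTSP stream (GstRtspServer listens on port 8554 by default), so it can be consumed like any IP camera feed. For example, in the client code in the next section, the capture line could be swapped for the stream URL (a sketch, assuming the server and client run on the same machine):

# read the simulated RTSP stream instead of the local file
cap = cv2.VideoCapture("rtsp://localhost:8554/stream1")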


Connecting to a Video Source

In a real-world scenario we would connect to an IP (Internet Protocol) camera that captures images and sends them over an IP network. Most IP cameras can send images and video streams to a Network Video Recorder (NVR) or a server.

import cv2, os
import time
import requests


############### SET THESE VARIABLES ACCORDING TO YOUR HOSTING ##########
URL = "http://192.168.1.11:2016/alpr"


# creating a new folder for storing temporary files
os.makedirs("all_frames", exist_ok=True)


# this is where we handle the response coming in real time from the ALPR hosting
def process_response(response):
    if not response.get('results'):
        # no vehicle/plate detected in this frame
        return
    Number_plate = response['results'][0]['plate']
    car_type = response['results'][0]['vehicle']['type']
    region = response['results'][0]['region']['code']
    print(f"Vehicle with number Plate :: --{Number_plate}-- detected")
    print(f"Vehicle is of type :: {car_type}")
    print(f"Vehicle belongs to region :: {region}")


def send_request(frame_path):
    # 'upload' is the form field the ALPR endpoint expects the image under
    with open(frame_path, "rb") as f:
        r = requests.post(URL, files={'upload': f})
    return r


# Reading the relevant video file
cap = cv2.VideoCapture("video.MOV")
count = 0
while True:
    # Capture frame-by-frame
    ret, frame = cap.read()
    if not ret:
        # end of the video file
        break
    # writing to disk is not the most optimal solution, only reproduced here for better understanding and ease
    file_path = f"all_frames/frame_num_{count}.png"
    cv2.imwrite(file_path, frame)
    start = time.time()
    resp = send_request(file_path)
    print("time required for each frame :: ", time.time() - start)
    if resp.status_code == 200:
        process_response(resp.json())
    count += 1

cap.release()

You will see an output of the type:

time required for each frame ::  0.25281453132629395
Vehicle with number Plate :: --6zqp411-- detected
Vehicle is of type :: SUV
Vehicle belongs to region :: us-ca
time required for each frame ::  0.2795543670654297
Vehicle with number Plate :: --6zqp411-- detected
Vehicle is of type :: SUV
Vehicle belongs to region :: us-ca
time required for each frame ::  0.28991127014160156
Vehicle with number Plate :: --6zqp411-- detected
Vehicle is of type :: SUV
Vehicle belongs to region :: us-ca

Inference speed for this solution

Inference speed depends on the system the application is deployed on; a better CPU, or even a GPU deployment, will give faster inference. For our implementation on the 4-core machine we achieved roughly 0.28 seconds per frame, or about 3-4 frames per second when frames are sent sequentially. That is fast enough to use this solution in real time for our drive-thru use case: alerting staff to the approach of a known vehicle so a regular customer can be given a customized experience.

We can further modify this code to send alerts.

We only need a small change to the code above to have it send requests to a Flask app; the modified portion looks like the following.

import time  # needed for the timestamps kept in time_data_store

# Plates belonging to known regular customers
regular_customers = ["6zqp411"]
# To keep track of detections: plate -> [alert_count, last_alert_timestamp]
time_data_store = {}
# Setting default values
for each_cust in regular_customers:
    time_data_store[each_cust] = [0, time.time()]


def send_alert(Number_plate, time_data_store=time_data_store):
    # notify the alert receiver and record when this vehicle was last alerted on
    requests.post("http://localhost:9000/alert_system",
                  json={"vehicle": Number_plate})
    time_data_store[Number_plate][0] = time_data_store[Number_plate][0] + 1
    time_data_store[Number_plate][1] = time.time()


def process_response(response, time_data_store=time_data_store):
    if not response.get('results'):
        # no vehicle/plate detected in this frame
        return
    Number_plate = response['results'][0]['plate']
    car_type = response['results'][0]['vehicle']['type']
    region = response['results'][0]['region']['code']

    if Number_plate not in regular_customers:
        return

    # alert on the first sighting, then at most once every 60 seconds per vehicle
    if time_data_store[Number_plate][0] == 0:
        send_alert(Number_plate)
    elif (time.time() - time_data_store[Number_plate][1]) > 60:
        send_alert(Number_plate)
The Flask app receiving alerts from this implementation can be created with this small snippet:

from flask import Flask, request
from termcolor import cprint

app = Flask(__name__)


@app.route("/alert_system", methods=["GET", "POST"])
def index():
    # print a highlighted alert on the console whenever a known plate is posted
    v_plate = request.json['vehicle']
    cprint(f'\n REGULAR CUSTOMER ALERT ::: Vehicle number ::: {v_plate}  \n',
           'white', "on_green",
           attrs=['blink', 'bold'])

    return ""


app.run(host="0.0.0.0", port=9000)
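Start this receiver before the frame-processing loop. The port (9000) and path (/alert_system) must match the URL used in send_alert, and this sketch assumes both scripts run on the same machine, which is why the alert is posted to localhost.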

Conclusion

We can easily deploy an ALPR service with Platform9 to create an on-premises solution that can be used to generate alerts to restaurant systems for menu personalization, pay-by-plate and order automation.
