The Eye of Sauron

A Scalable Face Recognition System for Surveillance, built on top of Kafka using the face_recognition module.

Key Features  ·  Stream Processing Pipeline  ·  How To Use  ·  Configuration  ·  Examples  ·  Scaling Performance  ·  Credits  ·  Contact

(demo GIF)

🎨 Key Features

🔨 Stream Processing Pipeline

(pipeline diagram)

▶️ How To Use

To clone and run this application, you'll need Git, Python 3 (with pip), and Kafka (v1.0.0 or v1.1.0, built against Scala 2.11 or 2.12; any combination works) installed on your cluster. I used Pegasus for the cluster setup on AWS, with the environment setup modified to this custom setup file.
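
Before launching the producers, it helps to confirm that the frame topic exists on the brokers. Below is a minimal, hypothetical sanity-check sketch using kafka-python's admin client; the topic name, broker address, partition count, and replication factor here are assumptions for illustration (the project reads its own values from params.py), not the repository's exact setup code.

# Hypothetical sketch: create the frame topic if it does not exist yet.
# Topic name, broker address, partitions, and replication factor are assumed values.
from kafka.admin import KafkaAdminClient, NewTopic
from kafka.errors import TopicAlreadyExistsError

admin = KafkaAdminClient(bootstrap_servers="localhost:9092")
try:
    admin.create_topics([
        NewTopic(name="frame_objs", num_partitions=8, replication_factor=3)
    ])
    print("Created topic frame_objs")
except TopicAlreadyExistsError:
    print("Topic frame_objs already exists")
finally:
    admin.close()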

  1. From your command line (for the web app and for getting feeds from the cameras):
# Clone this repository
$ git clone https://github.com/rrqq/eye_of_sauron.git

# Go into the repository
$ cd eye_of_sauron

# Install dependencies
$ sudo pip3 install -r requirements.txt

# Change permissions
$ chmod +x run_producers.py

# Run the app
$ ./run_producers.py

or

# Run the app
$ python3 run_producers.py

Note: If you're using Linux/Bash and the run files were saved with Windows line endings, you may need to convert them:

$ sudo apt-get install dos2unix
$ dos2unix run_producers.py
$ dos2unix run_consumers.py
  2. From your command line (for consumer nodes, i.e. face recognition, consuming the frame messages produced from the videos):
# Clone this repository
$ git clone https://github.com/rrqq/eye_of_sauron.git

# Install dependencies
$ sudo pip3 install -r requirements.txt

# Run consumers
$ python3 run_consumers.py

⚙️ Configuration

  1. params.py

    • SET_PARTITIONS sets the number of partitions for FRAME_TOPIC and PROCESSED_FRAME_TOPIC, which controls the level of parallelism. A rule of thumb when latency is a key factor is to keep the number of partitions below 100 × b × r, where b is the number of brokers in the cluster and r is the replication factor (e.g., with 4 brokers and a replication factor of 3, stay below 1,200 partitions). Multiple partitions also help with fault tolerance and with scaling up or down: if one or more consumers stop mid-run, the assignor reassigns their unconsumed partitions to the remaining valid consumers.

    • ROUND_ROBIN: set to True to partition messages with RoundRobinPartitioner; otherwise Murmur2Partitioner (the better option) is used.

  2. frame_producer.py

    • The StreamVideo class (inherits multiprocessing.Process) processes a video from video_path and publishes its frames to a specific topic, here FRAME_TOPIC (see the producer sketch after this list).
  3. prediction_producer.py

    • The ConsumeFrames class (inherits multiprocessing.Process) consumes messages containing encoded frames, timestamped and keyed. It processes each frame (detecting the faces in it, specifically their locations, and calculating the face encodings) and pushes the results to PROCESSED_FRAME_TOPIC (see the consumer sketch after this list).

    • The PredictFrames class (inherits multiprocessing.Process) consumes messages containing encoded frames, detected face locations, and face encodings. The process waits for user input, i.e. the query or target faces to look for, matches the detected faces against the query face, and publishes the result to the respective camera topic, ready to be consumed by the web app for viewing. From here the results can also be pushed to a database for analysis.
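
To make the frame_producer.py side concrete, here is a minimal, hypothetical sketch of a StreamVideo-style producer built with kafka-python and OpenCV. The topic name, broker address, and message layout (a JSON value carrying a base64-encoded JPEG frame) are assumptions for illustration, not the repository's exact implementation.

# Hypothetical StreamVideo-style producer sketch; names and message format are assumed.
import json
import time
from base64 import b64encode
from multiprocessing import Process

import cv2
from kafka import KafkaProducer

FRAME_TOPIC = "frame_objs"             # assumed topic name (see params.py)
BOOTSTRAP_SERVERS = "localhost:9092"   # assumed broker address


class StreamVideo(Process):
    """Read frames from a video source and publish them to FRAME_TOPIC."""

    def __init__(self, video_path, camera_num):
        super().__init__()
        self.video_path = video_path
        self.camera_num = camera_num

    def run(self):
        producer = KafkaProducer(
            bootstrap_servers=BOOTSTRAP_SERVERS,
            key_serializer=lambda k: str(k).encode(),
            value_serializer=lambda v: json.dumps(v).encode(),
        )
        capture = cv2.VideoCapture(self.video_path)
        frame_num = 0
        while True:
            success, frame = capture.read()
            if not success:
                break
            # JPEG-encode the frame and wrap it with metadata.
            _, buffer = cv2.imencode(".jpg", frame)
            message = {
                "camera": self.camera_num,
                "frame_num": frame_num,
                "timestamp": time.time(),
                "frame": b64encode(buffer.tobytes()).decode(),
            }
            # Keying by camera keeps each camera's frames on one partition (ordered).
            producer.send(FRAME_TOPIC, key=self.camera_num, value=message)
            frame_num += 1
        capture.release()
        producer.flush()


if __name__ == "__main__":
    StreamVideo("sample.mp4", camera_num=1).start()

Keying each message by camera number keeps a camera's frames on a single partition, preserving per-camera ordering while still letting different cameras be processed in parallel.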
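
And a matching, hypothetical ConsumeFrames-style worker that pulls frames from FRAME_TOPIC, runs face detection and encoding with the face_recognition module, and republishes the enriched messages to PROCESSED_FRAME_TOPIC. Again, the topic names, group id, and payload fields are assumptions.

# Hypothetical ConsumeFrames-style worker sketch; names and payload fields are assumed.
import json
from base64 import b64decode
from multiprocessing import Process

import cv2
import face_recognition
import numpy as np
from kafka import KafkaConsumer, KafkaProducer

FRAME_TOPIC = "frame_objs"                       # assumed, see params.py
PROCESSED_FRAME_TOPIC = "processed_frame_objs"   # assumed, see params.py
BOOTSTRAP_SERVERS = "localhost:9092"             # assumed broker address


class ConsumeFrames(Process):
    """Detect faces in incoming frames and forward their locations and encodings."""

    def run(self):
        consumer = KafkaConsumer(
            FRAME_TOPIC,
            bootstrap_servers=BOOTSTRAP_SERVERS,
            group_id="frame-processors",
            value_deserializer=lambda v: json.loads(v.decode()),
        )
        producer = KafkaProducer(
            bootstrap_servers=BOOTSTRAP_SERVERS,
            value_serializer=lambda v: json.dumps(v).encode(),
        )
        for message in consumer:
            payload = message.value
            # Decode the base64 JPEG back into an RGB image array.
            jpg = np.frombuffer(b64decode(payload["frame"]), dtype=np.uint8)
            frame = cv2.imdecode(jpg, cv2.IMREAD_COLOR)
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            # Face locations plus 128-d encodings for each detected face.
            locations = face_recognition.face_locations(rgb)
            encodings = face_recognition.face_encodings(rgb, locations)
            payload["face_locations"] = locations
            payload["face_encodings"] = [enc.tolist() for enc in encodings]
            producer.send(PROCESSED_FRAME_TOPIC, value=payload)


if __name__ == "__main__":
    ConsumeFrames().start()

Running several such workers in the same consumer group (up to the number of partitions) is what lets the pipeline scale horizontally: Kafka's assignor splits the partitions among them and reassigns work if a worker stops.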

🐾 Examples

A. 3 CAMERAS

(screenshot)

B. 6 CAMERAS

(screenshot)

C. Using the Eye for object detection over multiple cameras with a pretrained MobileNet-Caffe model.

(screenshot)

🚀 Scaling Performance

(latency chart)

❤️ Credits

This software is built on open source packages, including face_recognition and Apache Kafka.

✏️ Contact

LinkedIn  ·  GitHub @rrqq  ·  Kaggle @rrqqmm