An example of using Redis Streams, RedisGears, RedisAI, and RedisTimeSeries for Real-time Video Analytics (i.e. counting people).
Given this input video, the final output looks something like this:
This project demonstrates a possible deployment of the RedisEdge stack that provides real-time analytics of video streams.
The following diagram depicts the flows between the system's parts.
The process is a pipeline of operations that go as follows:
See "My Other Stack is RedisEdge" for a wordy overview. Watch "RedisConf19 Keynote Day 2" for a video demonstration.
The RedisEdge stack consists of the latest Redis stable release and select RedisLabs modules intended to be used in Edge computing. For more information refer to RedisEdge.
You Only Look Once, or YOLO for short (good overview), is an object detection neural network. This project uses the "tiny" YOLOv3 model.
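For a sense of how such a model ends up inside Redis, here is a minimal sketch of loading a TensorFlow tiny YOLO graph into RedisAI with redis-py. The key name, file path and tensor names below are illustrative assumptions, not necessarily what init.py uses:

$ python
# Sketch: load a tiny YOLO TensorFlow graph into RedisAI.
# Key name, model path and tensor names are illustrative assumptions.
import redis

r = redis.Redis(host='localhost', port=6379)

with open('models/tiny-yolo.pb', 'rb') as f:   # hypothetical path
    blob = f.read()

# AI.MODELSET <key> <backend> <device> [INPUTS ...] [OUTPUTS ...] <blob>
# (newer RedisAI releases use AI.MODELSTORE with a BLOB keyword instead)
r.execute_command('AI.MODELSET', 'yolo:model', 'TF', 'CPU',
                  'INPUTS', 'input', 'OUTPUTS', 'output', blob)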
Prerequisites:
$ git clone https://github.com/RedisGears/EdgeRealtimeVideoAnalytics.git
$ cd EdgeRealtimeVideoAnalytics
$ git lfs install; git lfs fetch; git lfs checkout
Refer to the build/installation instructions of the following projects to set up a Redis server with the relevant Redis modules. This application's connections default to redis://localhost:6379.
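Once the server is up, a quick way to confirm that it is reachable and that the modules are loaded is something along these lines (a sketch; adjust the URL if your server is elsewhere):

$ python
# Sketch: confirm the RedisEdge server is reachable and see which modules it loaded.
import redis

r = redis.Redis.from_url('redis://localhost:6379')
r.ping()                                      # raises an exception if unreachable
for module in r.execute_command('MODULE', 'LIST'):
    print(module)                             # name/version pairs, e.g. for ai, rg, timeseries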
Note that you'll also need to install the Pythonic requirements.txt for the embedded RedisGears Python interpreter. Here's how:
$ apt-get install python-opencv
$ python2
>>> import cv2       # if this succeeds then OpenCV is installed
>>> cv2.__file__     # you should get something like '/usr/lib/python2.7/dist-packages/cv2.x86_64-linux-gnu.so'
$ cd /opt/redislabs/lib/modules/python27/.venv/lib/python2.7/site-packages
$ ln -s /usr/lib/python2.7/dist-packages/cv2.x86_64-linux-gnu.so cv2.so
$ cd /opt/redislabs/lib/modules/python27
$ pipenv install numpy pillow
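To check that the embedded interpreter can actually import these packages, one option is to run a trivial script through RedisGears' RG.PYEXECUTE command (a sketch; the import list below is just an assumption of what the gear needs):

$ python
# Sketch: ask RedisGears' embedded interpreter to import the packages the gear will need.
import redis

r = redis.Redis()
# RG.PYEXECUTE runs the given script inside the embedded interpreter;
# it returns OK only if the imports succeed.
print(r.execute_command('RG.PYEXECUTE', 'import cv2, numpy, PIL'))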
Refer to the build/installation instructions of the following projects to set up Prometheus, Grafana and the RedisTimeSeries adapter:
See below for how to run a partially-dockerized setup that circumvents the need to install these locally.
The application is implemented in Python 3 and consists of the following parts:

- init.py: initializes Redis with the RedisAI model, the RedisTimeSeries downsampling rules and the RedisGears gear.
- capture.py: captures video stream frames from a webcam or an image/video file and stores them in a Redis Stream.
- server.py: a web server that serves a rendered image composed of the raw frame and the model's detections.
- top.py: prints runtime performance metrics (optional, i.e. to be run manually).
- gear.py: the RedisGears gear that glues the pipeline together (a rough sketch of its shape follows the setup commands below).

To run the application you'll need Python v3.6 or higher. Install the application's library dependencies with the following - it is recommended that you use virtualenv or similar:
$ virtualenv -p python3.6 venv
$ source venv/bin/activate
$ pip install -r app/requirements.txt
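For orientation only, the following is a highly simplified sketch of what a stream-processing gear of this kind looks like. It is not the project's gear.py: the stream/key/tensor names are assumptions, and the record layout and registration API depend on the RedisGears version.

# Sketch of a RedisGears gear that reads frames off a stream and runs them through
# a RedisAI model. The stream name ('camera:0'), the 'img' field and the 'yolo:*'
# keys/shapes are illustrative assumptions - this is not the project's gear.py.
import numpy as np
import cv2

def process(record):
    # Assume the stream entry carries the JPEG-encoded frame in an 'img' field
    # (the record layout varies between RedisGears versions).
    buf = np.frombuffer(record['value']['img'], dtype=np.uint8)
    frame = cv2.imdecode(buf, cv2.IMREAD_COLOR)
    # Assume the model wants a 1x416x416x3 float32 tensor with values in [0, 1].
    img = cv2.resize(frame, (416, 416)).astype(np.float32) / 255.0
    # execute() is RedisGears' built-in for running Redis commands from inside a gear.
    execute('AI.TENSORSET', 'yolo:in', 'FLOAT', '1', '416', '416', '3', 'BLOB', img.tobytes())
    execute('AI.MODELRUN', 'yolo:model', 'INPUTS', 'yolo:in', 'OUTPUTS', 'yolo:out')
    # The real gear also decodes the output tensor into boxes, counts the people,
    # stores the annotated results and updates the RedisTimeSeries metrics.

# GearsBuilder and execute are provided by the RedisGears runtime (no import needed).
GearsBuilder('StreamReader').foreach(process).register('camera:0')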
The application's parts are set up with default values that are intended to allow it to run "out of the box". For example, to run the capture process you only need to type:
$ python capture.py
This will run the capture process from device id 0.
However, most default values can be overridden from the command line - invoke the application's parts with the --help switch to learn about them.
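To illustrate what the capture step amounts to, here is a bare-bones sketch of grabbing frames with OpenCV and appending them to a Redis Stream; the stream name, field name and encoding choices are assumptions, not necessarily what capture.py does:

$ python
# Sketch: capture frames from device id 0 and XADD them to a Redis Stream.
# 'camera:0' and the 'img' field are illustrative assumptions.
import cv2
import redis

r = redis.Redis()
cap = cv2.VideoCapture(0)            # device id 0, as in the default run above

while True:
    ok, frame = cap.read()
    if not ok:
        break
    ok, jpg = cv2.imencode('.jpg', frame)
    if not ok:
        continue
    # Cap the stream's length so memory usage stays bounded.
    r.xadd('camera:0', {'img': jpg.tobytes()}, maxlen=1000)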
Prerequisites:
The following will spin up a fully dockerized environment:
$ docker-compose up
Alternatively, you can bring up a lean environment (no fancy UI) with:
$ docker-compose up redisedge init capture server
For performance gains, a local Docker composition that includes only the app's initializer, server, Grafana, Prometheus, and the RedisTimeSeries adapter is provided. Put differently, you need to provide the RedisEdge server and a video stream yourself.
To use it, first make sure that you start your RedisEdge server, e.g.:
$ ./redisedge/run.sh
Then, you can run the rest of the stack with:
$ docker-compose -f docker-compose.local.yaml up
Finally, make sure you actually start capturing something locally, e.g.:
$ python app/capture.py app/data/countvoncount.mp4
Note: when switching between fully- and partially-dockerized runs, make sure you rebuild (e.g. docker-compose up --build).
According to current wisdom, it is impossible to use the webcam from a Docker container on macOS. To work around that, always run capture.py locally on the host.
According to current wisdom, 'host' mode networking is a myth on macOS. Hence, the partially-dockerized mode is not available. TL;DR - it is either (almost) fully-dockerized or local for you.
By default the model is loaded to the CPU; if your RedisEdge server has GPU support, you can load it to the GPU instead by initializing accordingly (e.g. python init.py --device GPU). Both the top tool and the Grafana UI provide the following performance metrics:
Metrics sampled by capturing the Count's video and using the application's top tool:
| Hardware | OS | Dockerized | Device | in_fps | out_fps | prf_read (ms) | prf_resize (ms) | prf_model (ms) | prf_script (ms) | prf_boxes (ms) | prf_store (ms) | prf_total (ms) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Macbook Pro 15, 2015 | macOS Mojave | Yes | CPU (no [1]) | 30.0 | 4.5 | 16.1 | 4.4 | 167.5 | 8.6 | 1.9 | 0.5 | 199.0 |
| Macbook Pro 15, 2015 | Ubuntu 18.04 | Yes | CPU (no [1]) | 30.0 | 6.0 | 19.2 | 3.2 | 121.2 | 2.6 | 2.3 | 0.2 | 149.0 |
| Macbook Pro 15, 2015 | Ubuntu 18.04 | No | CPU (yes [2]) | 30.0 | 10.0 | 15.1 | 3.3 | 61.7 | 2.1 | 2.1 | 0.2 | 84.3 |
| AWS EC2 p3.8xlarge | Ubuntu 16.04 [3] | No | CPU (no [1]) | 30.0 | 10.0 | 17.9 | 2.9 | 49.7 | 4.5 | 2.5 | 0.2 | 77.7 |
| AWS EC2 p3.8xlarge | Ubuntu 16.04 [3] | No | CPU (yes [2]) | 30.0 | 11.0 | 17.6 | 3.9 | 31.6 | 11.9 | 2.4 | 0.2 | 67.6 |
| AWS EC2 p3.8xlarge | Ubuntu 16.04 [3] | No | GPU | 30.0 | 30.0 | 16.9 | 3.0 | 2.9 | 1.6 | 1.8 | 0.2 | 26.1 |
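If you want to pull these metrics straight out of RedisTimeSeries rather than through top or Grafana, a query along these lines should work; the key name below is a guess at a camera-id-plus-metric naming scheme and may differ from the keys the application actually creates:

$ python
# Sketch: read a metric's recent samples from RedisTimeSeries.
# 'camera:0:prf_total' is an assumed key name; adjust to the keys the app creates.
import time
import redis

r = redis.Redis()
now_ms = int(time.time() * 1000)
# TS.RANGE <key> <fromTimestamp> <toTimestamp> - samples from the last minute
samples = r.execute_command('TS.RANGE', 'camera:0:prf_total', now_ms - 60000, now_ms)
for ts, value in samples:
    print(ts, value)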
Stack versions:
Notes:
The application's rendered video stream (server.py) should be at http://localhost:5000/video.
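A quick way to check that the server is actually streaming, without opening a browser, is to poke that endpoint, for example:

$ python
# Sketch: confirm the rendered video endpoint responds.
import requests

resp = requests.get('http://localhost:5000/video', stream=True, timeout=5)
print(resp.status_code, resp.headers.get('Content-Type'))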
The Docker Compose setup also comes with a pre-provisioned Grafana server - it should be at http://localhost:3000/ (admin/admin). It is configured with the Prometheus data source and video dashboard, so once you log in:
Note: if you're installing this project on something that isn't your localhost, you'll need to put its hostname or IP address in the URL of the 'camera:0' Grafana panel for the video to show.