What is this?

This is a versatile robot platform. I've given it a few use cases:

1. Surveillance robot use case

android-app-screenshot1.jpg

2. Object follower use case

object-follower-example1.png

3. Alexa voice robot commands demo

4. Building the robot parts & schematics

Workarounds for not having a public IP on your dev board

Why does an intermediary Arduino layer have to exist instead of using the Pi directly?

Prerequisites

a. Ensure that your development board has a serial port. If you're using a Raspberry Pi, please ensure that the serial console is disabled and the port can be used. In the config I've assumed it's on /dev/ttyS0.

b. Your board works with a camera. If it's a Raspberry Pi with a picamera, ensure the camera is connected and enabled through raspi-config.
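
A quick sanity check for both prerequisites can be done from Python. This is only an illustrative snippet, assuming pyserial and picamera are installed and the serial port is /dev/ttyS0 as in the config; the baud rate here is just an example:

# Hypothetical sanity check for the two prerequisites above
import serial
from picamera import PiCamera

# a. the serial port should open without errors (baud rate is only an example)
with serial.Serial('/dev/ttyS0', 9600, timeout=1) as port:
    print('serial port OK:', port.name)

# b. the camera should initialize and capture a test image
with PiCamera() as camera:
    camera.capture('/tmp/test.jpg')
    print('camera OK, test image written to /tmp/test.jpg')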

1. The first use case is a surveillance robot that is controlled using an Android interface


Full tutorial on instructables

A video demo is available on YouTube

How does it work

a. The Android app shows the uv4l stream inside a WebView. The uv4l process runs on the Raspberry Pi, captures video input from the camera and streams it. It's an awesome tool with many features.

b. Using controls inside the Android app, light and engine commands are issued to the MQTT server.

c. The Python server inside the Docker container on the Raspberry Pi listens for MQTT commands and passes them over the serial interface to the Arduino board. The Arduino board controls the motors and the lights.

d. The Arduino board senses distances in front of and behind the robot and sends the data through the serial interface to the Python server; the Python server forwards them to MQTT, where they get picked up by the Android interface and shown to the user. A minimal sketch of this bridge is shown below.
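
In essence the Python server is an MQTT-to-serial bridge. Below is a minimal sketch of that idea, assuming paho-mqtt and pyserial; the broker address, topic names and command format are placeholders, not the project's actual protocol (see the Python server in this repository for that):

# Minimal MQTT-to-serial bridge sketch (illustrative only, placeholder topics)
import serial
import paho.mqtt.client as mqtt

arduino = serial.Serial('/dev/ttyS0', 9600, timeout=1)

def on_message(client, userdata, message):
    # forward every command (lights, engines) received over MQTT to the Arduino
    arduino.write(message.payload + b'\n')

client = mqtt.Client()
client.on_message = on_message
client.connect('localhost', 1883)
client.subscribe('robot/commands')  # placeholder topic
client.loop_start()

while True:
    # distance readings coming back from the Arduino are published for the app
    line = arduino.readline().strip()
    if line:
        client.publish('robot/sensors', line)  # placeholder topic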

flow-diagram.png

Extra

The robot will stream the video using UV4l

The android application is located in this repository

Installation for Raspberry Pi

A complete tutorial about uv4l is found here: https://www.instructables.com/id/Raspberry-Pi-Video-Streaming/?ALLSTEPS

Clone the project in the home folder:

git clone https://github.com/danionescu0/robot-camera-platform

The folder location is important because in docker-compose.yml the location is hardcoded as /home/pi/robot-camera-platform:/root/debug. If you need to change the location, please change the value in docker-compose.yml too.

Install uv4l streaming:

chmod +x uv4l/install.sh
chmod +x uv4l/start.sh
sh ./uv4l/install.sh 

Warning: you'll see warning messages like "The following signatures were invalid" because on the latest Raspbian operating system the uv4l packages are not fully supported.

Configure the project:

Test uv4l installation

a. Start it:

sh ./uv4l/start.sh 

b. Test it in the browser at the address: http://your_ip:9090/stream

c. Stop it

sudo pkill uv4l
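
If you'd rather check from another machine without a browser, a short Python probe (assuming the requests library is installed) can confirm that the stream endpoint responds; replace your_ip as above:

# quick reachability probe for the uv4l stream endpoint (adjust the IP)
import requests

response = requests.get('http://your_ip:9090/stream', stream=True, timeout=5)
print('HTTP status:', response.status_code)  # 200 means the stream is being served
response.close()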

Install docker and docker-compose

About docker installation: https://www.raspberrypi.org/blog/docker-comes-to-raspberry-pi/

About docker-compose installation: https://www.berthon.eu/2017/getting-docker-compose-on-raspberry-pi-arm-the-easy-way/

Auto starting services on reboot/startup

a. Copy the files from the systemctl folder to /etc/systemd/system/

b. Enable services:

sudo systemctl enable robot-camera.service
sudo systemctl enable robot-camera-video.service

c. Reboot

d. Optional, check status:

sudo systemctl status robot-camera.service
sudo systemctl status robot-camera-video.service

Build and install the Android app

a. Clone the repository

git clone https://github.com/danionescu0/android-robot-camera.git

b. Follow the instructions there to configure and build it

2. Object follower use case

The robot is able to follow a colored object, a specific face, or a person.

Prerequisites:

Details: https://www.raspberrypi.org/documentation/remote-access/vnc/

Install dependencies

Troubleshooting: if the camera does not show up as a /dev/video0 device, load the V4L2 driver manually:

sudo modprobe bcm2835-v4l2

Configuration

In navigation/config_navigation.py you'll find:

# minimum and maximum HSV tuples for the color object detector
# the color below is green
hsv_bounds = (
    (46, 83, 0),
    (85, 255, 212)
)

# minimum and maximum object size in percent of image width to be considered a valid detection
object_size_threshold = (4, 60)

#image is resized by width before processing to increase performance (speed)
#increasing "resize_image_by_width" will result in more accurate detection but slower processing
resize_image_by_width = 450

# angle to rotate camera in degrees
rotate_camera_by = 180

Optional: run unit tests

Unit tests use nose2.

In console run:

nose2

Running the colored object detector:

The colored object detector needs HSV calibration. To get a preview of the HSV bounds, you can use the tool located in navigation/visual_hsv_bounds.py like so:

Calibration: use the sliders to select some color (it will show up in white on the screen):

python navigation/visual_hsv_bounds.py 

After the calibration, write the values into navigation/config_navigation.py under hsv_bounds.
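
If you are curious what such a calibration tool does internally, here is a minimal sketch of the same idea, assuming OpenCV with GUI support and a camera at index 0 (the real tool is navigation/visual_hsv_bounds.py; the names and layout here are illustrative):

# minimal HSV calibration sketch: move the sliders until only the target
# object shows up white in the mask window, then note the six values
import cv2

def nothing(_):
    pass

cv2.namedWindow('mask')
for name, maximum in (('H min', 179), ('S min', 255), ('V min', 255),
                      ('H max', 179), ('S max', 255), ('V max', 255)):
    cv2.createTrackbar(name, 'mask', 0 if 'min' in name else maximum, maximum, nothing)

capture = cv2.VideoCapture(0)
while True:
    ok, frame = capture.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    lower = tuple(cv2.getTrackbarPos(n, 'mask') for n in ('H min', 'S min', 'V min'))
    upper = tuple(cv2.getTrackbarPos(n, 'mask') for n in ('H max', 'S max', 'V max'))
    cv2.imshow('mask', cv2.inRange(hsv, lower, upper))
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
capture.release()
cv2.destroyAllWindows()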

To run the colored object detector:

python3 object_tracking.py colored-object --show-video 

To run the object tracking script with no video output, simply omit the --show-video parameter.

Running the face detector:

python object_tracking.py specific-face --extra_cfg /path_to_a_picture_containing_a_face --show-video 

Running the person detector:

YouTube video: https://youtu.be/CLvkD5kB7xk

Console command:

python object_tracking.py tf-object-detector --show-video

How does the image detection work?

1. colored object detector

For the code, see the "ColoredObjectDetector.py" file.
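
The approach is standard HSV color masking: convert each frame to HSV, keep only the pixels inside hsv_bounds, take the largest contour in the resulting mask and accept it only if its width falls inside object_size_threshold. A simplified illustration of that pipeline, reusing the configuration values from above (this is not the project's actual class; imutils is assumed to be available for the width-based resize):

# simplified colored-object detection sketch, not the real ColoredObjectDetector
import cv2
import imutils

hsv_bounds = ((46, 83, 0), (85, 255, 212))
object_size_threshold = (4, 60)   # percent of image width
resize_image_by_width = 450

def detect(frame):
    frame = imutils.resize(frame, width=resize_image_by_width)
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, hsv_bounds[0], hsv_bounds[1])
    contours = imutils.grab_contours(
        cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE))
    if not contours:
        return None
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    size_percent = 100.0 * w / frame.shape[1]
    if not object_size_threshold[0] <= size_percent <= object_size_threshold[1]:
        return None  # too small or too large to count as a valid detection
    return x + w // 2, y + h // 2, size_percent  # object center and apparent size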

2. face recognition

For the face recognition we're using the "face_recognition" library to extract our specific face of interest from the image.

The problem is that this library is quite slow on a development board: even if we scale the image to 300 x 300, it takes more than a second per detection. We'll also use a new addition to OpenCV 3.4: object trackers. These trackers are quite fast but far less accurate than the "face_recognition" detection.

So I'm combining the two to get a compromise between them. First, the "face_recognition" library runs in a separate async process, and when a face is found, it passes the face coordinates to the faster tracker. The faster method, "TrackerCSRT_create" from the OpenCV library, is able to run synchronously on the Pi, processing frame by frame and helping guide the robot; a simplified sketch of this combination is shown below.

For the code, you can start with the "SpecificFaceDetector.py" file.
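
A heavily simplified, single-process version of that combination is sketched here. In the real code the slow detection runs in a separate async process and the specific face passed with --extra_cfg is matched; in this sketch the slow detector simply re-runs whenever the tracker loses the target, and all the names are illustrative:

# simplified combination of face_recognition (slow, accurate) and the CSRT
# tracker (fast, less accurate); the project runs the slow part in another process
import cv2
import face_recognition

capture = cv2.VideoCapture(0)
tracker = None

while True:
    ok, frame = capture.read()
    if not ok:
        break
    if tracker is None:
        # slow path: locate a face (roughly one second per frame on a Pi)
        locations = face_recognition.face_locations(frame)
        if locations:
            top, right, bottom, left = locations[0]
            tracker = cv2.TrackerCSRT_create()
            tracker.init(frame, (left, top, right - left, bottom - top))
    else:
        # fast path: update the CSRT tracker frame by frame to guide the robot
        success, box = tracker.update(frame)
        if success:
            x, y, w, h = (int(v) for v in box)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        else:
            tracker = None  # lost the face, fall back to the slow detector
    cv2.imshow('tracking', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break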

3. Alexa voice robot commands demo

Still a work in progress!

Prerequisites. For this you'll need:

Configure ngrok

Set up a virtualenv and install the dependencies:

 virtualenv voice_commands
 source voice_commands/bin/activate
 cd /home/pi/robot-camera-platform/
 pip install -r /home/pi/robot-camera-platform/requirements_voice_commands.txt

Run the voice commands script:

 source voice_commands/bin/activate
 cd /home/pi/robot-camera-platform/
 python voice_commands.py

4. Building the robot platform

The Arduino sketch can be found in the ./arduino-sketck folder.

Components

Fritzing schematic:

fritzig_sketch.png

Checklist:

The important thing to remember is that the robot should have two DC (6-12 V) motors and that each motor should be responsible for one side of the robot (left and right respectively).

Pinout:

Led flashlight: D3

Left motor: PWM (D5), EN1, EN2(A4, A5)

Right motor: PWM (D6), EN1, EN2(A3, A2)

Infrared sensors: Front (A0), Back(A1)

Serial communication pins: Tx: D11, Rx: D10