[PYTORCH] YOLO (You Only Look Once)

Introduction

Here is my PyTorch implementation of the model described in the paper YOLO9000: Better, Faster, Stronger.


An example of my model's output.

How to use my code

With my code, you can:

- Train a YOLO model from scratch on the VOC or COCO datasets
- Test a trained model on a standard dataset, on a folder of your own images, or on a video

Requirements:

Datasets:

I used 4 different datasets: VOC2007, VOC2012, COCO2014, and COCO2017. Statistics for the datasets I used in my experiments are shown below:

| Dataset  | Classes | Train (images/objects) | Validation (images/objects) |
|----------|---------|------------------------|-----------------------------|
| VOC2007  | 20      | 5011/12608             | 4952/-                      |
| VOC2012  | 20      | 5717/13609             | 5823/13841                  |
| COCO2014 | 80      | 83k/-                  | 41k/-                       |
| COCO2017 | 80      | 118k/-                 | 5k/-                        |

Create a data folder under the repository root:

cd {repo_root}
mkdir data
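
Then download the datasets you need into this folder. As an example, the official VOC2007 archives can be fetched as shown below (the URLs are the official PASCAL VOC mirrors; the directory layout the scripts expect is an assumption, so adjust it if yours differs):

cd data
wget http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCtrainval_06-Nov-2007.tar   # official VOC2007 trainval archive
wget http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCtest_06-Nov-2007.tar       # official VOC2007 test archive
tar -xf VOCtrainval_06-Nov-2007.tar
tar -xf VOCtest_06-Nov-2007.tar                                                   # unpacks to data/VOCdevkit/VOC2007/... (assumed expected layout)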

Settings:

The learning-rate schedule I used during training:

| Epochs  | Learning rate |
|---------|---------------|
| 0-4     | 1e-5          |
| 5-79    | 1e-4          |
| 80-109  | 1e-5          |
| 110-end | 1e-6          |
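
For illustration, here is a minimal Python sketch of this piecewise-constant schedule (not the repository's actual code; the optimizer and training-loop names are assumptions):

def get_learning_rate(epoch):
    # Piecewise-constant schedule from the table above
    if epoch < 5:
        return 1e-5
    if epoch < 80:
        return 1e-4
    if epoch < 110:
        return 1e-5
    return 1e-6

# Assumed usage with a standard torch.optim optimizer:
# for epoch in range(num_epochs):
#     for param_group in optimizer.param_groups:
#         param_group["lr"] = get_learning_rate(epoch)
#     train_one_epoch(model, optimizer)   # hypothetical helper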

Trained models

You can find all the models I have trained in YOLO trained models.

Training

For each dataset, I provide 2 different pre-trained models, each trained on the corresponding dataset.

You can specify which trained model file to use via the parameter pre_trained_model_type; the parameter pre_trained_model_path is then the path to that file.
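
For example, a hypothetical invocation combining both parameters (the script name and the argument values are assumptions, not taken from this README):

python3 train_voc.py --pre_trained_model_type model --pre_trained_model_path trained_models/trained_yolo_voc   # assumed script name and values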

If you want to train a model with a VOC dataset, you could run:
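
(Hypothetical invocation: the script name train_voc.py and its --year flag are assumptions; substitute the repository's actual training script.)

python3 train_voc.py --year 2012   # assumed script name and flag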

If you want to train a model with a COCO dataset, you could run:
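
(Again a hypothetical invocation; train_coco.py and --year are assumed names.)

python3 train_coco.py --year 2014   # assumed script name and flag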

If you want to train a model with both COCO datasets (training set = train2014 + val2014 + train2017, val set = val2017), you could run:
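
(Hypothetical invocation; the script name is an assumption.)

python3 train_coco_all.py   # assumed script name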

Test

For each type of dataset (VOC or COCO), I provide 3 different test scripts:

If you want to test a trained model with a standard VOC dataset, you could run:
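
(Hypothetical invocation; test_voc_dataset.py and --year are assumed names.)

python3 test_voc_dataset.py --year 2007   # assumed script name and flag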

If you want to test a model on your own images, you can put them all into one folder, whose path is path/to/input/folder, then run:
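
(Hypothetical invocation; the script name and the --input/--output flags are assumptions.)

python3 test_voc_images.py --input path/to/input/folder --output path/to/output/folder   # assumed script and flags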

If you want to test a model with a video, you could run:
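
(Hypothetical invocation; the script name and flags are assumptions.)

python3 test_voc_video.py --input path/to/input/video --output path/to/output/video   # assumed script and flags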

Experiments:

I trained the models on 2 machines: one with an NVIDIA TITAN X 12GB GPU and the other with an NVIDIA Quadro 6000 24GB GPU.

The training/test loss curves for each experiment are shown below:

Statistics for mAP will be updated soon ...

Results

Some example predictions from the experiments on each dataset are shown below: