This repo addresses Video Highlight Detection (VHD).
Baidu_VH is the first large-scale dataset for VHD; our main experiments are based on it.
Code:
Tips:
Our non-deep models have reached 42% mAP (averaged over IoU thresholds from 0.5 to 0.95 with a step of 0.05) on the Baidu_VH validation set. For more details on the non-deep models, see the 'non-deep' directory. Results for deep models will be released later.
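To make the metric above concrete, here is a minimal sketch (not the repo's evaluation code; all function names are hypothetical) of temporal IoU between two segments and of averaging a per-threshold score over the IoU thresholds 0.5, 0.55, ..., 0.95:

```python
def temporal_iou(pred, gt):
    """IoU of two 1-D segments, each given as a (start, end) pair."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = (pred[1] - pred[0]) + (gt[1] - gt[0]) - inter
    return inter / union if union > 0 else 0.0

def mean_over_thresholds(score_at):
    """Average a per-threshold metric over IoU thresholds 0.5:0.95, step 0.05."""
    thresholds = [0.5 + 0.05 * i for i in range(10)]
    return sum(score_at(t) for t in thresholds) / len(thresholds)
```

For example, `temporal_iou((0, 10), (5, 15))` is 5/15; `score_at` would be the AP computed at a single IoU threshold.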
The non-deep approach is implemented with a modified version of Xiong's TAG. An experimental report and the relevant paper will be released soon.
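The core idea behind TAG-style proposal generation can be sketched as follows (a simplified illustration, not the repo's implementation: real TAG sweeps multiple thresholds and merges the resulting groups; here a single threshold is shown):

```python
def group_segments(scores, tau):
    """Group contiguous frames with score >= tau into [start, end) segments.

    scores: per-frame actionness/highlightness scores; tau: threshold.
    """
    segments, start = [], None
    for i, s in enumerate(scores):
        if s >= tau and start is None:
            start = i                      # a segment opens here
        elif s < tau and start is not None:
            segments.append((start, i))    # the segment closes before frame i
            start = None
    if start is not None:                  # segment runs to the end of the video
        segments.append((start, len(scores)))
    return segments
```

Each returned segment is a candidate highlight proposal, which a downstream classifier then scores.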
Note that so far we have only used xgb+lr (non-deep), trained on just 1% of the training data. We will update the results soon by replacing xgb+lr with a small MLP and by using the full training set.
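The xgb+lr combination usually means fitting boosted trees first, then training a logistic regression on one-hot-encoded leaf indices. A hypothetical sketch of that pattern on toy data (not the repo's code; it uses sklearn's GradientBoostingClassifier as a stand-in for XGBoost, and all data and parameters are made up):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import OneHotEncoder

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 8))             # toy per-segment features
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # toy highlight labels

# Stage 1: fit the trees.
gbdt = GradientBoostingClassifier(n_estimators=30, max_depth=3, random_state=0)
gbdt.fit(X, y)

# Stage 2: one-hot encode the leaf each sample falls into, per tree,
# and fit a logistic regression on those sparse leaf features.
leaves = gbdt.apply(X)[:, :, 0]           # shape (n_samples, n_estimators)
enc = OneHotEncoder(handle_unknown="ignore")
lr = LogisticRegression(max_iter=1000)
lr.fit(enc.fit_transform(leaves), y)

probs = lr.predict_proba(enc.transform(gbdt.apply(X)[:, :, 0]))[:, 1]
```

The trees act as a learned feature crosser; the linear model on leaf indicators is cheap to retrain, which is one reason this pairing is popular before moving to an end-to-end MLP.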