mxnet-centernet

MXNet port of CenterNet (original PyTorch implementation: https://github.com/xingyizhou/CenterNet)

Objects as Points, Xingyi Zhou, Dequan Wang, Philipp Krähenbühl, arXiv technical report (arXiv:1904.07850)

Abstract

Detection identifies objects as axis-aligned boxes in an image. Most successful object detectors enumerate a nearly exhaustive list of potential object locations and classify each. This is wasteful, inefficient, and requires additional post-processing. In this paper, we take a different approach. We model an object as a single point -- the center point of its bounding box. Our detector uses keypoint estimation to find center points and regresses to all other object properties, such as size, 3D location, orientation, and even pose. Our center point based approach, CenterNet, is end-to-end differentiable, simpler, faster, and more accurate than corresponding bounding box based detectors. CenterNet achieves the best speed-accuracy trade-off on the MS COCO dataset, with 28.1% AP at 142 FPS, 37.4% AP at 52 FPS, and 45.1% AP with multi-scale testing at 1.4 FPS. We use the same approach to estimate 3D bounding box in the KITTI benchmark and human pose on the COCO keypoint dataset. Our method performs competitively with sophisticated multi-stage methods and runs in real-time.
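
To make the keypoint-estimation idea above concrete, here is a minimal NumPy sketch of how a per-class center heatmap target is commonly built for training: each ground-truth center is splatted as a 2D Gaussian whose spread depends on the object size, and overlapping objects of the same class keep the element-wise maximum. Function and argument names below are illustrative only and are not this port's API.

```python
import numpy as np

def splat_center(heatmap, cx, cy, radius):
    """Draw a Gaussian peak at an object's center on its class heatmap.

    heatmap: (H, W) array for one class, in output (stride-4) coordinates.
    cx, cy:  object center on that map; radius: size-dependent spread.
    Hypothetical helper for illustration only.
    """
    sigma = radius / 3.0
    H, W = heatmap.shape
    ys, xs = np.ogrid[:H, :W]
    gaussian = np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2.0 * sigma ** 2))
    # Overlapping objects of the same class keep the element-wise maximum.
    np.maximum(heatmap, gaussian, out=heatmap)
    return heatmap
```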

Overview

CenterNet is a generic network design that works for various regression tasks. The official code addresses three tasks: (1) 2D object detection, (2) 3D object detection, and (3) multi-person pose estimation.

Each object is represented by a single point, the center of its bounding box, which localizes it spatially. All other object attributes (box size, depth, orientation, keypoints) are regressed from features at that center. This makes CenterNet conceptually simpler than previous single-shot detectors: it needs no anchor boxes and no NMS post-processing, since peaks in the center heatmap directly yield detections (see the sketch below).
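
As a concrete illustration of the decoding step, the sketch below shows, in plain NumPy with illustrative names and shapes rather than this port's actual API, how a class heatmap, a width/height map, and a sub-pixel offset map are turned into scored boxes: local maxima of the heatmap stand in for NMS, and the regression maps are read at each peak.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def decode_detections(heat, wh, reg, top_k=100, down_ratio=4):
    """Turn CenterNet-style output maps into boxes (conceptual sketch only).

    heat: (C, H, W) per-class center heatmap after a sigmoid.
    wh:   (2, H, W) predicted box width/height at each location.
    reg:  (2, H, W) predicted sub-pixel offset of the center.
    """
    C, H, W = heat.shape
    # A 3x3 max-filter keeps only local maxima of the heatmap; in CenterNet
    # this peak extraction takes the place of conventional NMS.
    peaks = np.where(heat == maximum_filter(heat, size=(1, 3, 3)), heat, 0.0)
    # Top-K peaks across all classes.
    order = np.argsort(peaks.reshape(-1))[::-1][:top_k]
    scores = peaks.reshape(-1)[order]
    classes = order // (H * W)
    ys, xs = (order % (H * W)) // W, (order % (H * W)) % W
    boxes = []
    for score, cls, y, x in zip(scores, classes, ys, xs):
        # Refine the center with the offset head, read the size head, and map
        # feature-map coordinates back to input pixels (stride-4 output,
        # ignoring any resize/letterbox transform for simplicity).
        cx = (x + reg[0, y, x]) * down_ratio
        cy = (y + reg[1, y, x]) * down_ratio
        w, h = wh[0, y, x] * down_ratio, wh[1, y, x] * down_ratio
        boxes.append((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2, score, cls))
    return boxes
```

A confidence threshold (around 0.3 in the official demo) is typically applied to the returned scores; for the 3D detection and pose tasks the same peaks index into additional heads (depth, orientation, joint offsets) instead of just width/height.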

What's done

TODO

Example commands

(1) 2D Object Detection (2DOD)

(2) 3D Object Detection (3DOD)

(3) 2D Multi-Person Pose Estimation

Official Implementation by Xingyi Zhou

Other Ports