Pointnet2.ScanNet

PointNet++ semantic segmentation on ScanNet, implemented in PyTorch with CUDA acceleration, based on the original PointNet++ repo and the PyTorch implementation with CUDA.

Performance

Semantic segmentation results (in %) on the ScanNet train/val split provided in data/:

| Class | Score (%) |
| --- | --- |
| Avg | 50.62 |
| Floor | 90.96 |
| Wall | 63.87 |
| Cabinet | 35.21 |
| Bed | 56.75 |
| Chair | 62.43 |
| Sofa | 68.46 |
| Table | 47.15 |
| Door | 36.12 |
| Window | 34.12 |
| Bookshelf | 25.62 |
| Picture | 23.58 |
| Counter | 41.46 |
| Desk | 42.73 |
| Curtain | 32.38 |
| Refrigerator | 44.12 |
| Bathtub | 64.93 |
| Shower | 63.90 |
| Toilet | 74.04 |
| Sink | 58.13 |
| Others | 46.40 |

The pretrained models: SSG and MSG

Installation

Requirements

Install

Install this library by running the following commands:

cd pointnet2
python setup.py install
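
After installation, a quick import check helps confirm the CUDA ops compiled correctly. This sketch assumes the build exposes `pointnet2_utils.furthest_point_sample` as in the upstream Pointnet2_PyTorch layout this repo builds on; adjust the import if your build differs.

```python
# Quick sanity check that the compiled CUDA ops are importable and functional.
# Assumes the build exposes pointnet2_utils.furthest_point_sample as in the
# upstream Pointnet2_PyTorch layout -- adjust the import if yours differs.
# Requires a CUDA-capable GPU.
import torch
from pointnet2 import pointnet2_utils

xyz = torch.rand(1, 1024, 3).cuda()                    # (batch, num_points, xyz)
idx = pointnet2_utils.furthest_point_sample(xyz, 128)  # sample 128 points
print(idx.shape)                                       # expected: torch.Size([1, 128])
```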

Configure

Change the path configurations for the ScanNet data in lib/config.py
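The variable names below are illustrative only (the actual keys are defined in lib/config.py); the point is that every dataset path should point at your local copies:

```python
# Hypothetical names -- check lib/config.py for the actual configuration keys.
SCANNET_DIR = "/path/to/ScanNet/scans"              # raw ScanNet scans
SCANNET_FRAMES_ROOT = "/path/to/scannet_frames"     # extracted 2D frames (optional)
PREP_SCANS_PATH = "preprocessing/scannet_scenes"    # output of the preprocessing step
```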

Prepare multiview features (optional)

  1. Download the ScanNet frames here (~13GB) and unzip them.

  2. Extract the multiview features from ENet:

    python compute_multiview_features.py

  3. Generate the projection mapping between image and point cloud:

    python compute_multiview_projection.py

  4. Project the multiview features from image space to point cloud (see the sketch after this list):

    python project_multiview_features.py
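
For intuition, here is a minimal numpy sketch of what step 4 does: project each 3D point into a frame using the camera pose and intrinsics, then gather the 2D feature at the resulting pixel. The repo's project_multiview_features.py additionally handles depth-based occlusion checks and aggregation across frames, which this sketch omits; all names here are illustrative.

```python
import numpy as np

def project_points(points, pose, intrinsic, feat_map):
    """Gather per-point features from one frame's 2D feature map.

    points:    (N, 3) world-space xyz
    pose:      (4, 4) camera-to-world matrix for the frame
    intrinsic: (3, 3) intrinsics (scaled to the feature-map resolution)
    feat_map:  (C, H, W) 2D features, e.g. from ENet
    """
    C, H, W = feat_map.shape
    # World -> camera: invert the camera-to-world pose.
    pts_h = np.concatenate([points, np.ones((len(points), 1))], axis=1)
    cam = (np.linalg.inv(pose) @ pts_h.T)[:3]           # (3, N)
    # Camera -> pixel coordinates via the pinhole model.
    uvz = intrinsic @ cam
    u = np.round(uvz[0] / uvz[2]).astype(int)
    v = np.round(uvz[1] / uvz[2]).astype(int)
    # Keep points in front of the camera and inside the image bounds.
    valid = (uvz[2] > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    feats = np.zeros((len(points), C), dtype=feat_map.dtype)
    feats[valid] = feat_map[:, v[valid], u[valid]].T    # nearest-pixel lookup
    return feats, valid
```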

Usage

preprocess ScanNet scenes

Parse the ScanNet data into *.npy files and save them in preprocessing/scannet_scenes/

python preprocessing/collect_scannet_scenes.py
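
Each scene ends up as a single array of points. A quick way to inspect one file (the scene id is just an example; the exact column layout, e.g. xyz/color/label, is defined by collect_scannet_scenes.py):

```python
# Inspect one preprocessed scene; each row is one point. The exact column
# layout is defined by preprocessing/collect_scannet_scenes.py.
import numpy as np

scene = np.load("preprocessing/scannet_scenes/scene0000_00.npy")  # example scene id
print(scene.shape, scene.dtype)
print(scene[:3])  # first few points
```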

sanity check

Don't forget to visualize the preprocessed scenes to check that they are consistent with the original data

python preprocessing/visualize_prep_scene.py --scene_id <scene_id>

The visualized <scene_id>.ply is stored in preprocessing/label_point_clouds/
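
You can open the PLY in any point cloud viewer (e.g. MeshLab), or inspect it programmatically; this snippet assumes open3d is installed, which is not one of this repo's dependencies:

```python
# Optional: inspect the exported PLY programmatically (requires open3d).
import open3d as o3d

pcd = o3d.io.read_point_cloud("preprocessing/label_point_clouds/scene0000_00.ply")
print(pcd)                                # point count and attributes
o3d.visualization.draw_geometries([pcd])  # interactive viewer
```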

train

Train the PointNet++ semantic segmentation model on ScanNet scenes

python train.py

The trained models and logs will be saved in outputs/<time_stamp>/

Note: please refer to train.py for more training settings
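
Conceptually, training is standard per-point cross-entropy over sampled scene chunks. The sketch below is schematic only; the function and tensor names are not train.py's actual API:

```python
# Schematic of one per-point cross-entropy training step; illustrative only.
import torch
import torch.nn.functional as F

def train_step(model, optimizer, points, labels):
    """points: (B, N, C_in) chunks sampled from scenes, labels: (B, N)."""
    optimizer.zero_grad()
    logits = model(points)                                  # (B, N, num_classes)
    loss = F.cross_entropy(logits.transpose(1, 2), labels)  # CE over all points
    loss.backward()
    optimizer.step()
    return loss.item()
```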

eval

Evaluate the trained models and report the segmentation performance in point accuracy, voxel accuracy, and calibrated voxel accuracy

python eval.py --folder <time_stamp>
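
Roughly, point accuracy is the per-point hit rate, while voxel accuracy majority-votes labels inside each occupied voxel before comparing, so densely sampled regions don't dominate the score. The sketch below illustrates the first two metrics; eval.py's exact voxelization and the calibration step may differ:

```python
import numpy as np

def point_accuracy(preds, labels):
    """Fraction of points whose predicted label matches the ground truth."""
    return float((preds == labels).mean())

def voxel_accuracy(xyz, preds, labels, voxel_size=0.05):
    """Majority-vote labels inside each occupied voxel, then compare."""
    keys = np.floor(xyz / voxel_size).astype(np.int64)   # voxel grid indices
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    correct = total = 0
    for v in range(inverse.max() + 1):
        mask = inverse == v
        gt = np.bincount(labels[mask]).argmax()          # majority GT label
        pred = np.bincount(preds[mask]).argmax()         # majority prediction
        correct += gt == pred
        total += 1
    return correct / total
```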

vis

Visualize the semantic segmentation results on points in a given scene

python visualize.py --folder <time_stamp> --scene_id <scene_id>

The generated <scene_id>.ply is stored in outputs/<time_stamp>/preds. See the class palette here
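
A sketch of how per-point predictions can be baked into a colored PLY with a per-class palette (the `palette` argument and function name here are hypothetical; the repo's visualize.py may differ in detail):

```python
# Illustrative PLY export: map each point's class label to a palette color.
import numpy as np

def write_labeled_ply(path, xyz, labels, palette):
    """xyz: (N, 3) floats, labels: (N,) class ids, palette: per-class (r, g, b)."""
    colors = np.asarray(palette, dtype=np.uint8)[labels]  # (N, 3) per-point rgb
    with open(path, "w") as f:
        f.write("ply\nformat ascii 1.0\n")
        f.write(f"element vertex {len(xyz)}\n")
        f.write("property float x\nproperty float y\nproperty float z\n")
        f.write("property uchar red\nproperty uchar green\nproperty uchar blue\n")
        f.write("end_header\n")
        for (x, y, z), (r, g, b) in zip(xyz, colors):
            f.write(f"{x} {y} {z} {r} {g} {b}\n")
```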

Acknowledgement