The code is tested under TF1.9.0 (GPU version) and Python 3.6.8. Some basic operations, such as farthest point sampling, come from the implementation of PointNet++.
The TF operators are included under tf_ops; you need to compile them first (check tf_xxx_compile.sh under each ops subfolder). Update the nvcc and python paths if necessary. The operator code was originally tested under TF1.2.0; with earlier versions you may need to remove the -D_GLIBCXX_USE_CXX11_ABI=0 flag from the g++ command in order to compile correctly.

To compile the operators under TF version >= 1.4, you need to modify the compile scripts slightly. First, find the TensorFlow include and library paths:

TF_INC=$(python -c 'import tensorflow as tf; print(tf.sysconfig.get_include())')
TF_LIB=$(python -c 'import tensorflow as tf; print(tf.sysconfig.get_lib())')

Then add the flags -I$TF_INC/external/nsync/public -L$TF_LIB -ltensorflow_framework to the g++ commands.
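As a sketch of what the modified compile script looks like, the extra flags for TF >= 1.4 can be assembled as below. The site-packages paths here are placeholders; in practice use the TF_INC and TF_LIB values queried via tf.sysconfig above.

```shell
# Placeholder paths -- in practice take them from tf.sysconfig as shown above.
TF_INC=/usr/local/lib/python3.6/dist-packages/tensorflow/include
TF_LIB=/usr/local/lib/python3.6/dist-packages/tensorflow

# Extra flags to append to the g++ command in each tf_xxx_compile.sh script.
EXTRA_FLAGS="-I$TF_INC/external/nsync/public -L$TF_LIB -ltensorflow_framework"
echo "$EXTRA_FLAGS"
```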
We use the pre-processed data of PointNet++. To generate the direction dataset, run the code in code_for_directions.

The sampled point clouds of ModelNet40 (XYZ and normals from mesh, 10k points per shape) are available here (1.6GB). Move the uncompressed data folder to data/modelnet40_normal_resampled, and move the direction dataset to data/modelnet40_normal_resampled/patch_mat/directions.
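The expected ModelNet40 data layout can be sketched as follows (mkdir -p stands in for moving the actual uncompressed folders):

```shell
# Sketch of the target layout after moving the data and direction folders.
mkdir -p data/modelnet40_normal_resampled/patch_mat/directions
ls -d data/modelnet40_normal_resampled/patch_mat/directions
```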
To train and evaluate on ModelNet40:
python train_di_cnn.py --normal
python eval_di_cnn.py --num_votes 12 --normal
The preprocessed ShapeNetPart dataset (XYZ, normals and part labels) can be found here (674MB). Move the uncompressed data folder to data/shapenetcore_partanno_segmentation_benchmark_v0_normal, then move the direction dataset to data/shapenetcore_partanno_segmentation_benchmark_v0_normal/directions_seg.
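Before training on ShapeNetPart, it may help to sanity-check the data paths described above. A minimal sketch (mkdir -p stands in for moving the real folders):

```shell
# Verify the segmentation data paths exist before launching training.
DATA_ROOT=data/shapenetcore_partanno_segmentation_benchmark_v0_normal
mkdir -p "$DATA_ROOT/directions_seg"   # stands in for moving the real folders
for d in "$DATA_ROOT" "$DATA_ROOT/directions_seg"; do
    if [ -d "$d" ]; then echo "ok: $d"; else echo "missing: $d"; fi
done
```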
To train and evaluate on ShapeNetPart:
python train_di_seg.py
python eval_di_seg.py --repeat_num 24
S3DIS needs to be pre-processed by partitioning it into blocks, following the implementation of PointCNN. Run the code in S3DIS.