Data-driven Structural Priors for Shape Completion

Minhyuk Sung, Vladimir G. Kim, Roland Angst, and Leonidas Guibas
SIGGRAPH Asia 2015

Citation:
Please cite our paper if you use this code:

Minhyuk Sung, Vladimir G. Kim, Roland Angst, and Leonidas Guibas
Data-driven Structural Priors for Shape Completion,
ACM Transactions on Graphics (Proc. SIGGRAPH Asia 2015)

=================

Installation

See 'lib/install_libraries.sh' for required libraries.

Prepare a new dataset

  1. Prepare mesh files and label files (*.off, *.gt)
    Make sure that every mesh is normalized so that the distance from its bounding box center to the farthest point is exactly 1 (UNIT LENGTH).
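
    The following is a minimal Python sketch of this normalization (illustrative only, not part of this codebase; it assumes a plain ASCII OFF file and also recenters the mesh at its bounding box center):

      # normalize_off.py -- illustrative sketch, not part of this codebase.
      # Recenters an ASCII OFF mesh at its bounding box center and rescales it
      # so that the distance from the center to the farthest vertex is 1.
      import sys

      def normalize_off(in_path, out_path):
          tokens = open(in_path).read().split()
          assert tokens[0] == 'OFF'
          nv, nf = int(tokens[1]), int(tokens[2])
          coords = [float(t) for t in tokens[4:4 + 3 * nv]]
          verts = [coords[i:i + 3] for i in range(0, 3 * nv, 3)]
          # Bounding box center.
          center = [(min(v[k] for v in verts) + max(v[k] for v in verts)) / 2
                    for k in range(3)]
          # Distance from the center to the farthest vertex.
          radius = max(sum((v[k] - center[k]) ** 2 for k in range(3)) ** 0.5
                       for v in verts)
          with open(out_path, 'w') as f:
              f.write('OFF\n%d %d 0\n' % (nv, nf))
              for v in verts:
                  f.write('%f %f %f\n' %
                          tuple((v[k] - center[k]) / radius for k in range(3)))
              rest, i = tokens[4 + 3 * nv:], 0
              while i < len(rest):  # copy face records unchanged
                  cnt = int(rest[i])
                  f.write(' '.join(rest[i:i + cnt + 1]) + '\n')
                  i += cnt + 1

      if __name__ == '__main__':
          normalize_off(sys.argv[1], sys.argv[2])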

  2. Create a mesh and label file directory for the new dataset:
    ($shape2pose)/data/1_input/($dataset_name)
    This directory should contain both an off and a gt subdirectory, holding the mesh (.off) and label (.gt) files, respectively.

  3. Create an information file directory for the new dataset:
    ($shape2pose)/data/0_body/($dataset_name)
    This directory should contain the following files:

    • regions.txt: This file is used by both the shape2pose code and the cuboid-prediction code.
      Each line has the form: part (part_name) pnts 1
      The part on the first line corresponds to label number 0, the part on the next line to label number 1, and so on.
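
      Ex) With hypothetical part names (left_leg is label number 0, right_leg is label 1, and seat is label 2):

      part left_leg pnts 1
      part right_leg pnts 1
      part seat pnts 1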

    • regions_symmetry.txt: This file is used by both the shape2pose code and the cuboid-prediction code.
      Each line lists a set of mutually symmetric parts: (part_name_1) (part_name_2) ... (part_name_k)
      All part names must appear in the regions.txt file.
      A part with no symmetric counterparts must still be written on its own line, without any other part name.
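
      Ex) Continuing the hypothetical parts above, where the two legs are symmetric to each other and the seat has no symmetric counterpart:

      left_leg right_leg
      seat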

    • symmetry_groups.txt: This file is used only by the cuboid-prediction code.
      In contrast to the regions_symmetry.txt file, which lists symmetric parts for learning local classifiers,
      the symmetry_groups.txt file describes symmetric parts in terms of the part structure.
      For example, all legs of a chair are considered symmetric to each other when training local classifiers, but the front legs and the rear legs form separate symmetry groups in the part structure.

      Each symmetry group should be recorded in the following format:

      symmetry_group (rotation/reflection) (axis_index:[0,1,2])
      single_label_indices (label_number_0 label_number_1 ... label_number_k)
      pair_label_indices (label_number_pair_a_0 label_number_pair_b_0 ... label_number_pair_a_k label_number_pair_b_k)
      

      A symmetry group is either a rotation group or a reflection group.
      The axis index is the part-local axis index ([x, y, z] → [0, 1, 2]) of the reflection plane normal or of the rotation axis.
      Single label indices list single parts that are symmetric with respect to the symmetry axis (i.e., self-symmetric).
      Pair label indices list pairs of parts that are symmetric to each other with respect to the symmetry axis.
      Each pair of successive label numbers label_number_pair_a_i label_number_pair_b_i defines the i-th pair.
      A rotation symmetry group MUST NOT have pair label indices (currently not supported),
      and a reflection symmetry group MAY omit pair label indices (they are optional).

      Ex)

      symmetry_group reflection 0
      single_label_indices 0
      pair_label_indices 1 2 3 4
      

      Let [+, +, +] denote the cuboid corner of a part located at
      (center_x + 0.5 * size_x, center_y + 0.5 * size_y, center_z + 0.5 * size_z).
      In this example, the [-, +, +] corner of the part with label 1 is symmetric to the [+, +, +] corner of the part with label 2.
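
      As a reading aid, the following is a minimal Python sketch of a parser for this format (illustrative only; it is not the actual loader used by the cuboid-prediction code):

      # parse_symmetry_groups.py -- illustrative sketch, not the actual loader.
      def parse_symmetry_groups(path):
          groups, group = [], None
          for line in open(path):
              tokens = line.split()
              if not tokens:
                  continue
              if tokens[0] == 'symmetry_group':
                  # e.g. "symmetry_group reflection 0"
                  group = {'type': tokens[1],       # 'rotation' or 'reflection'
                           'axis': int(tokens[2]),  # part-local axis: 0, 1, or 2
                           'single': [], 'pairs': []}
                  groups.append(group)
              elif tokens[0] == 'single_label_indices':
                  group['single'] = [int(t) for t in tokens[1:]]
              elif tokens[0] == 'pair_label_indices':
                  labels = [int(t) for t in tokens[1:]]
                  # Successive label numbers (a_i, b_i) form the i-th pair.
                  group['pairs'] = list(zip(labels[0::2], labels[1::2]))
                  # Rotation groups must not have pair label indices.
                  assert group['type'] != 'rotation'
          return groups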


Train/test local point classifiers

Make sure that all files are prepared as mentioned above.

  1. [IMPORTANT]
    Copy the regions.txt and regions_symmetry.txt files from ($shape2pose)/data/0_body/($dataset_name) to ($shape2pose)/data/0_body.
    Double-check these files.

  2. Make a ($dataset_name)_all.txt file in ($shape2pose)/script/examples.
    This file should list all mesh file names (without extensions).
    If you train on a subset of files, make the name list file with that subset.
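
    A short Python snippet can generate this list from the off directory (paths are illustrative):

      # make_name_list.py -- illustrative helper, not part of this codebase.
      import os, sys

      off_dir, out_path = sys.argv[1], sys.argv[2]
      names = sorted(os.path.splitext(f)[0]
                     for f in os.listdir(off_dir) if f.endswith('.off'))
      open(out_path, 'w').write('\n'.join(names) + '\n')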

  3. [IMPORTANT]
    If you do cross-validation, copy trainRegions_cv.py and predictRegions_cv.py to trainRegions.py and predictRegions.py in ($shape2pose)/script/scriptlibs, respectively.
    If you run on a subset of meshes without cross-validation, copy trainRegions_origin.py and predictRegions_origin.py to trainRegions.py and predictRegions.py instead.

  4. For training, run the following command in ($shape2pose)/script:
    ./train.py ($dataset_name) exp1_($dataset_name) examples/($dataset_name)_all.txt

    The classifier files (train_(part_name).arff, weka_(part_name).model) are generated in
    ($shape2pose)/data/3_trained/classifier/exp1_($dataset_name)
    If you do cross-validation, the classifier files are generated in a separate directory per mesh name.
    Make sure that all mesh name directories contain the same number of files (classifier files for all parts); see the sanity check sketched below.
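
    A quick sanity check of the cross-validation output might look like this (illustrative only; the directory layout is assumed from the description above):

      # check_cv_output.py -- illustrative sanity check, not part of this codebase.
      import os, sys

      root = sys.argv[1]  # e.g. .../data/3_trained/classifier/exp1_($dataset_name)
      counts = {d: len(os.listdir(os.path.join(root, d)))
                for d in sorted(os.listdir(root))
                if os.path.isdir(os.path.join(root, d))}
      for name, n in sorted(counts.items()):
          print('%s: %d files' % (name, n))
      if len(set(counts.values())) > 1:
          print('WARNING: mesh name directories differ in file count')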

  5. For testing, run the following command in ($shape2pose)/script:
    ./test.py ($dataset_name) exp1_($dataset_name) examples/($dataset_name)_all.txt

    The prediction files are generated in
    ($shape2pose)/data/4_experiments/classifier/exp1_($dataset_name)


Run experiments

  1. Compile the code
    In ../../build/OSMesaViewer/build, run make.

  2. Make an experiment directory:
    ($cuboid-prediction)/experiments/exp1_($dataset_name)
    This directory should contain the following files:

    • arguments.txt: The following is an example set of arguments:

      --data_root_path=($shape2pose)
      --label_info_path=data/0_body/($dataset_name)/
      --mesh_path=data/1_input/($dataset_name)/off/
      --sample_path=data/2_analysis/($dataset_name)/points/even1000/
      --dense_sample_path=data/2_analysis/($dataset_name)/points/random100000/
      --mesh_label_path=data/1_input/($dataset_name)/gt/
      --sample_label_path=data/4_experiments/exp1_($dataset_name)/1_prediction/
      

    [IMPORTANT]
    Make sure that data_root_path is set correctly.

    • pose.txt: Camera pose file for rendering.

  3. Run experiments
    In ($cuboid-prediction)/python, run the following command:
    ./batch_exec.py ($exp_type) ($shape2pose)/data/1_input/($dataset_name)/off/ ../experiments/($dataset_name)/

    Run the command in the following ($exp_type) order:

    1. ground_truth_cuboids: Create ground truth cuboids in ../experiments/($dataset_name)/training
      [IMPORTANT]
      After this step, run the following command in ../experiments/($dataset_name) to generate the part relation statistics files:
      ../../build/OSMesaViewer/build/Build/bin/OSMesaViewer --run_training --flagfile=arguments.txt

    2. prediction: Run our method. Files are generated in ../experiments/($dataset_name)/output

    3. part_assembly: Run part assembly. Files are generated in ../experiments/($dataset_name)/part_assembly

    4. symmetry_detection: Run symmetry detection. Files are generated in ../experiments/($dataset_name)/symmetry_detection
      [IMPORTANT]
      BEFORE this, run the following command in ($cuboid-prediction)/python:
      ./batch_symmetry_detection.py ($shape2pose)/data/1_input/($dataset_name)/off/ ../experiments/($dataset_name)/
      Make sure that the binDir variable in ./batch_symmetry_detection.py is set correctly.

    5. baseline: Compute baseline. Files are generated in ../experiments/($dataset_name)/baseline

    6. render_assembly: Render part assembly cuboids. Should be executed after part assembly.

    7. render_evaluation [optional]: Re-render all experimental result images (including part assembly and symmetry detection). Used when rendering with new parameters.

    8. extract_symmetry_info [optional]: Used to extract symmetry axis information from the results of our method.

  4. Generate HTML result pages
    In ($cuboid-prediction)/python, run the following command:
    ./generate_all.sh ../experiments/($dataset_name)/


The resulting files are created in the following directories:
`($cuboid-prediction)/output/exp1_($dataset_name)`
`($cuboid-prediction)/output/assembly_exp1_($dataset_name)`
`($cuboid-prediction)/output/symm_detection_exp1_($dataset_name)`
`($cuboid-prediction)/output/baseline_exp1_($dataset_name)`

To generate paper figures, run the following command in `($cuboid-prediction)/python/figures`:
`./fig_N.py fig_N.txt`
Select examples and record them in the `fig_N.txt` files.
TeX files and related image files are generated in `($cuboid-prediction)/report` and `($cuboid-prediction)/report/images`.

Parameters

The following are notable parameters (they can be set by adding lines to the arguments.txt file); an example block follows the list:

  • occlusion_pose_filename:
    [IMPORTANT] If this is set to "", a random occlusion pose is generated based on random_view_seed.
  • random_view_seed:
    Seed number for random occlusion view generation.
  • param_min_num_symmetric_point_pairs:
    If the number of symmetric point pairs is less than this value, the symmetric point pairs are not considered in optimization.
  • param_min_sample_point_confidence:
    In the initial step, only points with confidence greater than this value are clustered. A lower value can be better when there is noise in the input points.
  • param_sparse_neighbor_distance:
    Point neighbor distance (in most cases).
  • param_cuboid_split_neighbor_distance:
    Point neighbor distance for splitting initial cuboids.
  • param_occlusion_test_neighbor_distance:
    Point neighbor distance for occlusion test.
  • param_fusion_grid_size:
    Voxel size for fusion.
  • param_min_cuboid_overall_visibility:
    If the overall visibility of a missing cuboid is greater than this value, the cuboid is considered to have been created in the visible area and is ignored.
  • param_fusion_visibility_smoothing_prior:
    MRF smoothing prior value for fusion.
  • param_eval_min_neighbor_distance:
    Minimum error value for accuracy/completeness rendering.
    Run render_evaluation for rendering with new parameter values.
  • param_eval_max_neighbor_distance:
    Maximum error value for accuracy/completeness rendering.
    Run render_evaluation for rendering with new parameter values.
  • use_view_plane_mask:
    [IMPORTANT] Set this to true to use a view plane 2D occlusion mask.
  • param_view_plane_mask_proportion:
    The view plane 2D occlusion mask is created so that this proportion of points is occluded in addition to self-occlusion (i.e., AFTER self-occlusion).
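
For example, the following lines could be appended to arguments.txt for a run with a randomly seeded occlusion pose and a 30% view plane occlusion mask (the parameter names are from the list above; the values are illustrative):

  --occlusion_pose_filename=
  --random_view_seed=1234
  --use_view_plane_mask=true
  --param_view_plane_mask_proportion=0.3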

Experiment result files

($shape2pose)/data/0_body/, ($shape2pose)/data/1_input/
assembly_airplanes
assembly_bicycles
assembly_chairs
coseg_chairs
shapenet_tables
scan_chairs

($shape2pose)/script/examples
assembly_airplanes_all.txt
assembly_bicycles_all.txt
assembly_chairs_all.txt
coseg_chairs_all.txt
shapenet_tables_all.txt
scan_chairs_all.txt
coseg_chairs_all_N_train.txt, coseg_chairs_all_N_test.txt
('N' is the proportion of training mesh files.)

($shape2pose)/data/3_trained/classifier, ($shape2pose)/data/4_experiments
exp1_cv_assembly_airplanes_all.txt
exp1_cv_assembly_bicycles_all.txt
exp1_cv_assembly_chairs_all.txt
exp1_cv_coseg_chairs_all.txt
exp1_cv_shapenet_tables_all.txt
exp1_scan_chairs_all.txt
exp_coseg_chairs_N

($cuboid-prediction)/experiments
final3_assembly_airplanes_all
final3_assembly_bicycles_all
final3_assembly_chairs_all
final3_coseg_chairs_all
final3_shapenet_tables_all
test_scan_chairs
partial_coseg_chairs_N

Results with a view plane mask (30% 2D occlusion proportion):
view_mask_assembly_airplanes_all
view_mask_assembly_bicycles_all
view_mask_assembly_chairs_all
view_mask_coseg_chairs_all
view_mask_shapenet_tables_all
