Pose2Mesh: Graph Convolutional Network for 3D Human Pose and Mesh Recovery from a 2D Human Pose

[figure: qualitative pose-to-mesh results]

Introduction

This repository is the official PyTorch implementation of Pose2Mesh: Graph Convolutional Network for 3D Human Pose and Mesh Recovery from a 2D Human Pose (ECCV 2020). Below is the overall pipeline of Pose2Mesh.

[figure: overall pipeline]

Install guidelines

  • We recommend using an Anaconda virtual environment. Install Python >= 3.7.2 and PyTorch >= 1.2 matching your GPU driver, then run sh requirements.sh.

Quick demo

  • Download the pre-trained Pose2Mesh according to this.
  • Prepare the SMPL and MANO layers according to this.
  • Prepare a pose input, for instance, as input.npy. input.npy should contain the coordinates of 2D human joints, which follow the topology of the joint sets defined here. The joint orders can be found in each ${ROOT}/data/*/dataset.py. A sketch of building such a file follows this list.
  • Run python demo/run.py --gpu 0 --res 500 --input input.npy --joint_set {human36,coco,smpl,mano}
  • The outputs demo_2Dpose.png and demo_mesh.png will be saved in ${ROOT}/demo/result/.
  • To visualize the Pose2Mesh predictions on the COCO validation set, prepare the COCO data according to this, and run python demo/run_coco.py --gpu 0 --res 500 --sample_num 10.
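For reference, here is a minimal sketch of how an input.npy could be created. The coordinate values and the (num_joints, 2) array shape are assumptions based on the description above; the exact joint count and order must match the chosen --joint_set (see ${ROOT}/data/*/dataset.py).

import numpy as np

# Hypothetical 2D pose: one (x, y) pixel coordinate per joint.
# The row order must follow the joint set passed to --joint_set.
joints_2d = np.array([
    [250.0, 120.0],  # joint 0 (placeholder value)
    [260.0, 110.0],  # joint 1 (placeholder value)
    # ... one row per remaining joint of the chosen joint set
], dtype=np.float32)

np.save('input.npy', joints_2d)  # then: python demo/run.py --input input.npy ...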

Results

Here I report the performance of Pose2Mesh.

Update: the performance on 3DPW has improved by using 2D detections from DarkPose, which refines the HRNet outputs.

[table: benchmark results]

The table below shows the results when the input is ground-truth 2D human poses. For the Human3.6M benchmark, Pose2Mesh is trained on Human3.6M. For the 3DPW benchmark, Pose2Mesh is trained on Human3.6M and COCO.

           MPJPE      PA-MPJPE
Human36M   51.28 mm   35.61 mm
3DPW       63.10 mm   35.37 mm
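For clarity, a sketch of how these two metrics are commonly computed (a generic illustration, not the repository's evaluation code): MPJPE is the mean Euclidean distance between predicted and ground-truth 3D joints, and PA-MPJPE measures the same error after a Procrustes (similarity) alignment of the prediction to the ground truth.

import numpy as np

def mpjpe(pred, gt):
    # Mean per-joint position error; pred and gt are (J, 3) arrays in mm.
    return np.linalg.norm(pred - gt, axis=1).mean()

def pa_mpjpe(pred, gt):
    # Align pred to gt with the optimal scale, rotation, and translation
    # (orthogonal Procrustes via SVD), then compute MPJPE.
    p, g = pred - pred.mean(0), gt - gt.mean(0)
    U, s, Vt = np.linalg.svd(p.T @ g)
    R = (U @ Vt).T
    if np.linalg.det(R) < 0:  # avoid reflections
        Vt[-1] *= -1
        s[-1] *= -1
        R = (U @ Vt).T
    scale = s.sum() / (p ** 2).sum()
    aligned = scale * p @ R.T + gt.mean(0)
    return np.linalg.norm(aligned - gt, axis=1).mean()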

We provide qualitative results on SURREAL to show that Pose2Mesh can recover 3D shape to some degree. Please refer to the paper for more discussion.

[figure: SURREAL qualitative results]

Directory

Root

The ${ROOT} directory is organized as below.

${ROOT} 
|-- data
|-- demo
|-- lib
|-- experiment
|-- main
|-- manopth
|-- smplpytorch
  • data contains data loading codes and soft links to images and annotations directories.
  • demo contains demo codes.
  • lib contains kernel codes for Pose2Mesh.
  • main contains high-level codes for training or testing the network.
  • experiment contains the outputs of the system, which include train logs, trained model weights, and visualized outputs.

Data

The data directory structure should follow the hierarchy below.

${ROOT}  
|-- data  
|   |-- Human36M  
|   |   |-- images  
|   |   |-- annotations   
|   |   |-- J_regressor_h36m_correct.npy
|   |   |-- absnet_output_on_testset.json 
|   |-- MuCo  
|   |   |-- data  
|   |   |   |-- augmented_set  
|   |   |   |-- unaugmented_set  
|   |   |   |-- MuCo-3DHP.json
|   |   |   |-- smpl_param.json
|   |-- COCO  
|   |   |-- images  
|   |   |   |-- train2017  
|   |   |   |-- val2017  
|   |   |-- annotations  
|   |   |-- J_regressor_coco.npy
|   |   |-- hrnet_output_on_valset.json
|   |-- PW3D 
|   |   |-- data
|   |   |   |-- 3DPW_train.json
|   |   |   |-- 3DPW_validation.json
|   |   |   |-- 3DPW_test.json
|   |   |   |-- darkpose_output_on_testset.json
|   |   |   |-- hrnet_output_on_testset.json
|   |   |   |-- simple_output_on_testset.json
|   |   |-- imageFiles
|   |-- AMASS
|   |   |-- data
|   |   |   |-- cmu
|   |-- SURREAL
|   |   |-- data
|   |   |   |-- train.json
|   |   |   |-- val.json
|   |   |   |-- hrnet_output_on_testset.json
|   |   |   |-- simple_output_on_testset.json
|   |   |-- images
|   |   |   |-- train
|   |   |   |-- test
|   |   |   |-- val
|   |-- FreiHAND
|   |   |-- data
|   |   |   |-- training
|   |   |   |-- evaluation
|   |   |   |-- freihand_train_coco.json
|   |   |   |-- freihand_train_data.json
|   |   |   |-- freihand_eval_coco.json
|   |   |   |-- freihand_eval_data.json
|   |   |   |-- hrnet_output_on_testset.json
|   |   |   |-- simple_output_on_testset.json

If you hit a 'download limit' error when downloading the datasets from the Google Drive links, try this trick.

  • Go to the shared folder that contains the files you want to copy to your drive.
  • Select all the files you want to copy.
  • In the upper-right corner, click the three vertical dots and select "Make a copy".
  • The files are then copied to your personal Google Drive account, and you can download them from there.

Pytorch SMPL and MANO layer

  • For the SMPL layer, I used smplpytorch. The repo is already included in ${ROOT}/smplpytorch. Download basicModel_f_lbs_10_207_0_v1.0.0.pkl, basicModel_m_lbs_10_207_0_v1.0.0.pkl, and basicModel_neutral_lbs_10_207_0_v1.0.0.pkl from here (female & male) and here (neutral) to ${ROOT}/smplpytorch/smplpytorch/native/models.
  • For the MANO layer, I used manopth. The repo is already included in ${ROOT}/manopth. Download MANO_RIGHT.pkl from here to ${ROOT}/manopth/mano/models.
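As a quick sanity check that the model files are in place, the bundled smplpytorch layer can be run directly. The snippet below is adapted from smplpytorch's own demo; the random pose and shape parameters are placeholders.

import torch
from smplpytorch.pytorch.smpl_layer import SMPL_Layer

# Load the neutral SMPL model downloaded above.
smpl_layer = SMPL_Layer(
    center_idx=0,
    gender='neutral',
    model_root='smplpytorch/native/models')

# Random axis-angle pose (24 joints x 3) and shape (10 betas) parameters.
pose_params = torch.rand(1, 72) * 0.2
shape_params = torch.rand(1, 10) * 0.03

# The forward pass returns the mesh vertices and the 3D joints.
verts, joints = smpl_layer(pose_params, th_betas=shape_params)
print(verts.shape, joints.shape)  # torch.Size([1, 6890, 3]) torch.Size([1, 24, 3])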

Experiment

The experiment directory will be created as below.

${ROOT}  
|-- experiment  
|   |-- exp_*  
|   |   |-- checkpoint  
|   |   |-- graph 
|   |   |-- vis 
  • experiment contains train/test results of Pose2Mesh on various benchmark datasets. We recommend creating the folder as a soft link to a directory with large storage capacity.

  • exp_* is created for each train/test command. The wildcard stands for the time the train/test run started. The default timezone is UTC+9, but you can set it to your local time.

  • checkpoint contains the model checkpoints for each epoch.

  • graph contains visualized train logs of error and loss.

  • vis contains *.obj files of meshes and images with 2D human poses or human meshes.
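The saved meshes use the plain-text Wavefront OBJ format; below is a minimal sketch of writing one (a generic illustration, not the repository's export code).

import numpy as np

def save_obj(path, verts, faces):
    # verts: (V, 3) float array; faces: (F, 3) int array of 0-based indices.
    with open(path, 'w') as f:
        for v in verts:
            f.write('v {} {} {}\n'.format(v[0], v[1], v[2]))
        for face in faces + 1:  # OBJ face indices are 1-based
            f.write('f {} {} {}\n'.format(face[0], face[1], face[2]))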

Pretrained model weights

Download pretrained model weights from here to a corresponding directory.

${ROOT}  
|-- experiment  
|   |-- posenet_human36J_train_human36 
|   |-- posenet_cocoJ_train_human36_coco_muco
|   |-- posenet_smplJ_train_surreal
|   |-- posenet_manoJ_train_freihand
|   |-- pose2mesh_human36J_train_human36
|   |-- pose2mesh_cocoJ_train_human36_coco_muco
|   |-- pose2mesh_smplJ_train_surreal
|   |-- pose2mesh_manoJ_train_freihand
|   |-- posenet_human36J_gt_train_human36
|   |-- posenet_cocoJ_gt_train_human36_coco
|   |-- pose2mesh_human36J_gt_train_human36
|   |-- pose2mesh_cocoJ_gt_train_human36_coco

Running Pose2Mesh

[figure: joint set topologies]

Start

  • Pose2Mesh uses the Human3.6M, COCO, SMPL, and MANO joint sets for the Human3.6M, 3DPW, SURREAL, and FreiHAND benchmarks, respectively. For the COCO joint set, we manually add 'Pelvis' and 'Neck' joints by taking the midpoints of 'L_Hip'/'R_Hip' and 'L_Shoulder'/'R_Shoulder', respectively (see the sketch after this list).
  • In lib/core/config.py, you can change settings of the system, including the train/test datasets, the pre-defined joint set, a pre-trained PoseNet, the learning schedule, GT usage, and so on.
  • Note that the first dataset in DATASET.{train/test}_list should call the build_coarse_graphs function to set up the graph convolutions. Refer to the last line of the __init__ function in ${ROOT}/data/Human36M/dataset.py.
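A minimal sketch of that midpoint computation (the index arguments are hypothetical; the real indices follow the COCO joint order in ${ROOT}/data/COCO/dataset.py):

import numpy as np

def add_pelvis_and_neck(joints, l_hip, r_hip, l_shoulder, r_shoulder):
    # joints: (J, C) array of COCO joints; the index arguments locate the
    # hip and shoulder rows. Appends 'Pelvis' then 'Neck' as midpoints.
    pelvis = 0.5 * (joints[l_hip] + joints[r_hip])
    neck = 0.5 * (joints[l_shoulder] + joints[r_shoulder])
    return np.vstack([joints, pelvis, neck])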

Train

Select a config file in ${ROOT}/asset/yaml/ and train. You can change the train set and the pretrained PoseNet with your own *.yml file.

1. Pre-train PoseNet

To train from scratch, you should pre-train PoseNet first.

Run

python main/train.py --gpu 0,1,2,3 --cfg ./asset/yaml/posenet_{input joint set}_train_{dataset list}.yml

2. Train Pose2Mesh

Copy best.pth.tar from ${ROOT}/experiment/exp_*/checkpoint/ to ${ROOT}/experiment/posenet_{input joint set}_train_{dataset list}/, or download the pretrained weights following this.

Run

python main/train.py --gpu 0,1,2,3 --cfg ./asset/yaml/pose2mesh_{input joint set}_train_{dataset list}.yml

Test

Select a config file in ${ROOT}/asset/yaml/ and test. You can change the pretrained model weights. To save sampled outputs to *.obj files, set TEST.vis to True in the config file.

Run

python main/test.py --gpu 0,1,2,3 --cfg ./asset/yaml/{model name}_{input joint set}_test_{dataset name}.yml

Reference

@InProceedings{Choi_2020_ECCV_Pose2Mesh,
  author = {Choi, Hongsuk and Moon, Gyeongsik and Lee, Kyoung Mu},
  title = {Pose2Mesh: Graph Convolutional Network for 3D Human Pose and Mesh Recovery from a 2D Human Pose},
  booktitle = {European Conference on Computer Vision (ECCV)},
  year = {2020}
}
