Aligned Region-CNN

Created by Lu Zhang, Institute of Automation, Chinese Academy of Sciences.

Introduction

We propose a novel detector, AR-CNN, that tackles the practical position shift problem in multispectral pedestrian detection. For more details, please refer to our paper, "Weakly Aligned Cross-Modal Learning for Multispectral Pedestrian Detection" (ICCV 2019).

KAIST-Paired Annotation

The KAIST-Paired annotation is available through Google Drive and BaiduYun. If you have any problems, please feel free to contact me.

Preparation

First of all, clone the code:

git clone https://github.com/luzhang16/AR-CNN.git

Then, create a folder:

cd $AR-CNN && mkdir data

Prerequisites

  • Python 2.7 or 3.6
  • PyTorch 0.4.0
  • CUDA 8.0 or higher
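
A minimal environment setup might look like the following sketch; the environment name is hypothetical and the exact wheel depends on your CUDA version, so adapt as needed:

conda create -n ar-cnn python=3.6  # hypothetical environment name
source activate ar-cnn
pip install torch==0.4.0           # pick the build matching your CUDA version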

Data Preparation

It is recommended to symlink the dataset root to $AR-CNN/data.

AR-CNN
├── cfgs
├── lib
├── data
│   ├── kaist-paired
│   │   ├── annotations
│   │   ├── images
│   │   ├── splits

KAIST dataset: Please follow the instructions in rgbt-ped-detection to prepare the KAIST dataset.

KAIST-Paired annotation: Google Drive or BaiduYun.

Trainval & Test splits: mv $AR-CNN/splits/ $AR-CNN/data/kaist-paired/

Pretrained Model

We use a pretrained VGG16 model in our experiments. Download it and put it into data/pretrained_model/.
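
For example, if the downloaded weights file is named vgg16_caffe.pth (the filename used by the upstream faster-rcnn.pytorch project; an assumption here), it can be placed with:

mkdir -p data/pretrained_model
mv vgg16_caffe.pth data/pretrained_model/  # adjust the filename to your actual download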

Compilation

As pointed out by ruotianluo/pytorch-faster-rcnn, choose the right -arch option in make.sh to compile the CUDA code:

GPU model                   Architecture
TitanX (Maxwell/Pascal)     sm_52
GTX 960M                    sm_50
GTX 1080 (Ti)               sm_61
Grid K520 (AWS g2.2xlarge)  sm_30
Tesla K80 (AWS p2.xlarge)   sm_37

More details about setting the architecture can be found here or here.

Install all the python dependencies using pip:

pip install -r requirements.txt

Compile the CUDA dependencies using the following commands:

cd lib
sh make.sh

It will compile all the modules you need, including NMS, ROI_Pooling, ROI_Align and ROI_Crop. The default version is compiled with Python 2.7; please recompile if you are using a different Python version.

As pointed out in this issue, if you encounter errors during the compilation, you may have missed exporting the CUDA paths to your environment.
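
For example, assuming CUDA is installed under /usr/local/cuda (a common but not universal location), the paths can be exported like this:

export CUDA_HOME=/usr/local/cuda
export PATH=$CUDA_HOME/bin:$PATH
export LD_LIBRARY_PATH=$CUDA_HOME/lib64:$LD_LIBRARY_PATH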

Test

If you want to get the detection results on KAIST "reasonable" test set, simply run:

python test_net.py --dataset kaist --net vgg16 \
                   --checksession $SESSION --checkepoch $EPOCH --checkpoint $CHECKPOINT \
                   --reasonable --cuda

Specify the model session, epoch and checkpoint, e.g., SESSION=1, EPOCH=3, CHECKPOINT=17783.
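
For example, with those values the command becomes:

python test_net.py --dataset kaist --net vgg16 \
                   --checksession 1 --checkepoch 3 --checkpoint 17783 \
                   --reasonable --cuda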

If you want to run with our pretrained model, download it through Google Drive or BaiduYun (pwd: 3bxs).

Evaluate the output

You can use the evaluation script provided with the original KAIST dataset or this MATLAB evaluation tool: Google Drive or BaiduYun (pwd: 41qk).

Robustness test with manual position shift

If you want to get the detection results under the metric S, simply run:

sh test_shift.sh

If you want to get the detection results under certain position shift, simply run:

python test_net.py --dataset kaist --net vgg16 \
                   --checksession $SESSION --checkepoch $EPOCH --checkpoint $CHECKPOINT \
                   --reasonable --cuda --sx 4 --sy -4

Note that sx and sy denote the position shift in pixels along the x-axis and y-axis, respectively.
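
To sweep a range of shifts, e.g. when reproducing a robustness curve, a simple loop such as the following sketch can be used; the shift values are illustrative, and SESSION, EPOCH and CHECKPOINT are assumed to be set as above:

for s in -8 -4 0 4 8; do
    python test_net.py --dataset kaist --net vgg16 \
                       --checksession $SESSION --checkepoch $EPOCH --checkpoint $CHECKPOINT \
                       --reasonable --cuda --sx $s --sy 0
done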

Acknowledgement

We gratefully acknowledge faster-rcnn.pytorch, developed by Jianwei Yang and Jiasen Lu, on which this code is largely built.
