
Learning a Better Neural Network Architecture for Multiple Object Tracking

This implementation is based on the CVPR 2020 (oral) paper Learning a Neural Solver for Multiple Object Tracking by Guillem Brasó and Laura Leal-Taixé [Paper][Youtube][CVPR Daily].

The old implementation is available here.

Our two new mechanisms are shown in the method visualization below.

Setup

If you want to see the original setup, please read old_README.md.

  1. Clone and enter this repository:
    git clone --recursive https://github.com/anslt/ADL4CV.git
    cd ADL4CV
    
  2. (OPTIONAL) Download Anaconda if you are working on Colab:
    wget -c https://repo.anaconda.com/archive/Anaconda3-2020.11-Linux-x86_64.sh
    chmod +x Anaconda3-2020.11-Linux-x86_64.sh
    bash ./Anaconda3-2020.11-Linux-x86_64.sh -b -f -p /usr/local
    rm Anaconda3-2020.11-Linux-x86_64.sh
    
  3. Create an Anaconda environment for this project:
    conda env create -f environment.yaml
    conda activate mot_neural_solver
    pip install -e tracking_wo_bnw
    pip install -e .
    
  4. Download the MOTChallenge data, the ReID network, and the preprocessed detections:
    bash scripts/setup/download_motcha.sh
    bash scripts/setup/download_models.sh
    bash scripts/setup/download_prepr_dets.sh
    
  5. (OPTIONAL) Install additional missing packages if you are working on Colab:
    conda install -y ipykernel
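
  6. (OPTIONAL) Verify the installation. This quick check assumes the environment provides PyTorch and that "pip install -e ." registers the package as mot_neural_solver:
    conda activate mot_neural_solver
    # should print the installed PyTorch version without import errors
    python -c "import torch, mot_neural_solver; print(torch.__version__)"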
    

Training

Training parameters that are not covered in old_README.md are described below:

graph_model_params:
   time_aware: whether the node update is time-aware (default: False)
   attention:
     use_attention: whether to use attention (default: False)
     alpha: negative slope of the LeakyReLU (default: 0.2)
     attention_head_num: number of attention heads applied in the MPN (default: 2)
     att_regu: whether to apply attention regularization in the MPN (default: False)
     att_regu_strength: weight of the regularization term (default: 0.5)
     new_softmax: whether to use the modified softmax formulation for the attention weights (default: False)

   dynamical_graph:
     graph_pruning: whether to use graph pruning (default: False)
     first_prune_step: the iteration at which pruning starts (default: 4)
     prune_factor: fraction of edges pruned in each pruning iteration (default: 0.05)
     prune_frequency: how often (in iterations) the graph is pruned (default: 1)
     mode: how the pruning scores are computed (default: "classifier node wise")
          ["classifier node wise", "classifier naive"]
     prune_min_edge: minimum number of linked edges at which pruning stops for a node (default: 5)
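
Any of these keys can be overridden from the command line using the same dotted syntax as the training commands below; for example (a sketch with illustrative values, not our settings):

# hypothetical override of three of the keys documented above
python scripts/train.py with graph_model_params.time_aware=True graph_model_params.attention.att_regu_strength=0.3 graph_model_params.dynamical_graph.prune_min_edge=3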

Our Settings

We use cross_val_split 2. We train a model with regularized attention by running:

python scripts/train.py with cross_val_split=2 train_params.save_every_epoch=True train_params.num_epochs=25 train_params.num_workers=4 graph_model_params.attention.use_attention=True graph_model_params.attention.att_regu=True graph_model_params.attention.new_softmax=True 

We train a model with edge pruning:

python scripts/train.py with cross_val_split=2 train_params.save_every_epoch=True train_params.num_epochs=25 train_params.num_workers=4 graph_model_params.dynamical_graph.graph_pruning=True
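
The demo in the Visualization section below combines both mechanisms. Assuming the two sets of overrides compose, the corresponding command would look like this (a sketch, not necessarily the exact command we ran):

# regularized attention and edge pruning enabled together
python scripts/train.py with cross_val_split=2 train_params.save_every_epoch=True train_params.num_epochs=25 train_params.num_workers=4 graph_model_params.attention.use_attention=True graph_model_params.attention.att_regu=True graph_model_params.attention.new_softmax=True graph_model_params.dynamical_graph.graph_pruning=True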

Visualization

A small demo generated with our attention + edge pruning model can be viewed by clicking here.

Citation

If you use our work in your research, please cite the original publication:

    @InProceedings{braso_2020_CVPR,
      author    = {Guillem Brasó and Laura Leal-Taixé},
      title     = {Learning a Neural Solver for Multiple Object Tracking},
      booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
      month     = {June},
      year      = {2020}
    }

Please also consider citing Tracktor if you use it for preprocessing detections:

    @InProceedings{tracktor_2019_ICCV,
      author    = {Bergmann, Philipp and Meinhardt, Tim and Leal{-}Taix{\'{e}}, Laura},
      title     = {Tracking Without Bells and Whistles},
      booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
      month     = {October},
      year      = {2019}
    }

About

This project was developed for the ADL4CV (Advanced Deep Learning for Computer Vision) class at TUM.
