
# Sketch Me That Shoe

### Introduction

This repository contains the code for the CVPR 2016 paper ‘Sketch Me That Shoe’, a deep-learning-based method for fine-grained sketch-based image retrieval.

For more details, please visit our project page: http://www.eecs.qmul.ac.uk/~qian/Project_cvpr16.html

New: a TensorFlow implementation can be found here: https://github.com/yuchuochuo1023/Deep_SBIR_tf/tree/master.

If you use this code in your research, please cite our paper:

@inproceedings{qian2016,
    Author = {Qian Yu and Feng Liu and Yi-Zhe Song and Tao Xiang and Timothy M. Hospedales and Chen Change Loy},
    Title = {Sketch Me That Shoe},
    Booktitle = {IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
    Year = {2016}
}

#### Contents

  1. License

  2. Installation

  3. Run the demo

  4. Re-training the model

  5. Extra comment

### License

MIT License

### Installation

  1. Download the repository

    git clone git@github.com:seuliufeng/DeepSBIR.git
  2. Build Caffe and pycaffe

    a. Go to folder $SBIR_ROOT/caffe_sbir

    b. Modify the paths in Makefile.config. To use this code, you must compile Caffe with the Python layer enabled:

      WITH_PYTHON_LAYER := 1

    c. Compile Caffe:

      make -j32 && make pycaffe

  3. Go to the folder $SBIR_ROOT and run

    source bashsbir
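
To quickly verify the pycaffe build, a minimal check (assuming `source bashsbir` puts $SBIR_ROOT/caffe_sbir/python on your PYTHONPATH) is:

    import caffe           # should import cleanly if the build succeeded
    caffe.set_mode_cpu()   # or caffe.set_mode_gpu() if CUDA is available
    print(caffe.__file__)  # should point into caffe_sbir/python/caffe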

### Run the demo

  1. To run the demo, please first download our database and models. Go to the root folder of this project, and run

    chmod +x download_data.sh
    ./download_data.sh

Note: You can also download them manually from our project page: http://www.eecs.qmul.ac.uk/~qian/Project_cvpr16.html

  2. Run the demo:

    python $SBIR_ROOT/tools/sbir_demo.py
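
Conceptually, the demo embeds the query sketch and the gallery photos with the trained network and ranks photos by feature distance. The numpy sketch below illustrates the retrieval step only; the feature arrays and their dimensionality are placeholders, not the repository's actual API:

    import numpy as np

    # Hypothetical pre-extracted deep features (placeholder shapes and values).
    sketch_feat = np.random.rand(256)        # embedding of one query sketch
    photo_feats = np.random.rand(1000, 256)  # embeddings of the photo gallery

    # L2-normalise the features so Euclidean distances are comparable.
    sketch_feat /= np.linalg.norm(sketch_feat)
    photo_feats /= np.linalg.norm(photo_feats, axis=1, keepdims=True)

    # Rank gallery photos by Euclidean distance to the sketch; keep the top 10.
    dists = np.linalg.norm(photo_feats - sketch_feat, axis=1)
    top10 = np.argsort(dists)[:10]
    print("Top-10 retrieved photo indices:", top10)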

### Re-training the model

  1. Go to the root folder of this project

    cd $SBIR_ROOT
  2. Run the command

    ./experiments/train_sbir.sh

Note: Please make sure the initial model ‘/init/sketchnet_init.caffemodel’ is under the experiments/ folder. This initial model can be downloaded from our project page.

### Extra comment

  1. All provided models and code are the optimised versions. Our latest results are shown below:

     | Dataset | acc.@1 | acc.@10 | %corr. |
     |---------|--------|---------|--------|
     | Shoes   | 52.17% | 92.17%  | 72.29% |
     | Chairs  | 72.16% | 98.96%  | 74.36% |

Further explanation: the model reported in our paper was trained on our originally collected sketches, which contained considerable noise. To improve usability, we cleaned the sketch images (removing some noise) after the CVPR 2016 deadline; compare 'test_shoes_370.png' with '370.jpg' (or 'test_chairs_230.png' with '230.jpg') to see the difference. We re-trained our model on the cleaned sketches, and the new results are listed above. Both the model and the dataset released here are the latest versions. We apologise for any confusion this may cause. If you have further questions, please email q.yu@qmul.ac.uk.
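
For reference, acc.@K above is the fraction of query sketches whose true-match photo appears among the top K retrieved results. A minimal sketch of the metric, assuming a precomputed query-by-gallery distance matrix in which query i's true match is gallery item i (an illustrative pairing convention, not necessarily the dataset's layout):

    import numpy as np

    def acc_at_k(dist, k):
        # dist: (Q, G) distance matrix; query i's true match is gallery item i.
        topk = np.argsort(dist, axis=1)[:, :k]         # top-k gallery indices per query
        hit = topk == np.arange(len(dist))[:, None]    # is the true match among the top k?
        return hit.any(axis=1).mean()                  # fraction of queries with a hit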

  2. This project uses code from the following projects (a minimal sketch of the last two components appears after this list):

     - Caffe trainnet Python wrapper and Python data layer
     - L2 normalization layer
     - Triplet loss
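
As a rough illustration of the last two components, here is a minimal numpy sketch of an L2 normalization step followed by a triplet ranking loss; the margin value is illustrative, not the one used in the paper:

    import numpy as np

    def l2_normalize(x, eps=1e-12):
        # L2 normalization layer: scale each row to unit Euclidean norm.
        return x / (np.linalg.norm(x, axis=1, keepdims=True) + eps)

    def triplet_loss(anchor, positive, negative, margin=0.3):
        # Encourage d(anchor, positive) + margin < d(anchor, negative)
        # on L2-normalized embeddings.
        a, p, n = (l2_normalize(v) for v in (anchor, positive, negative))
        d_pos = np.sum((a - p) ** 2, axis=1)   # squared distance to the positive
        d_neg = np.sum((a - n) ** 2, axis=1)   # squared distance to the negative
        return np.mean(np.maximum(0.0, margin + d_pos - d_neg))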
