convert_cifar_for_hashing

For hashing and quantization experiments, the CIFAR-10 dataset is usually split into a training set of 59,000 images and a test set of 1,000 images. Since Caffe has become popular as deep learning gains traction, this tool performs that split and saves the result as LMDB, Caffe's default dataset storage format.
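The split described above (59,000 training / 1,000 test, i.e. 100 test images per class) can be sketched as follows. This is an illustrative sketch, not the repository's actual code; the function name and the random-seed handling are assumptions.

```python
import numpy as np

def split_for_hashing(labels, test_per_class=100, seed=0):
    """Split CIFAR-10 indices into the split common in hashing papers:
    a 1,000-image test set (100 per class) and a 59,000-image training set.
    Returns (train_indices, test_indices)."""
    labels = np.asarray(labels)
    rng = np.random.RandomState(seed)
    test_idx = []
    for c in np.unique(labels):
        # Indices of all images belonging to class c.
        cls = np.where(labels == c)[0]
        # Sample a fixed number of test images per class without replacement.
        test_idx.append(rng.choice(cls, test_per_class, replace=False))
    test_idx = np.concatenate(test_idx)
    # Everything not chosen for testing goes into the training set.
    mask = np.ones(len(labels), dtype=bool)
    mask[test_idx] = False
    train_idx = np.where(mask)[0]
    return train_idx, test_idx
```

With the full 60,000 CIFAR-10 labels this yields exactly 59,000 training and 1,000 test indices.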
In addition, people often want to train AlexNet on CIFAR-10 for hashing, in which case the images must be resized.

Usage:

python convert_cifar_for_hashing.py [--resize=256] path-of-cifar10-batches-py [the-path-of-lmdb]

Note: remember to add the path of 'caffe/python' to the PYTHONPATH environment variable.
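Since the --resize=256 option upscales the 32x32 CIFAR-10 images for AlexNet, here is a minimal nearest-neighbour resize sketch. This is only an assumption about the approach; the actual script presumably uses caffe.io or OpenCV for resizing.

```python
import numpy as np

def nn_resize(img, size=256):
    """Resize an HxWxC image to size x size via nearest-neighbour sampling."""
    h, w = img.shape[:2]
    # Map each output row/column back to its nearest source index.
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    return img[rows][:, cols]
```

For example, a 32x32x3 CIFAR-10 image becomes a 256x256x3 array suitable for AlexNet's input pipeline.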
