For hashing and quantization experiments, the CIFAR-10 dataset is usually split into a training set of 59,000 images and a test set of 1,000 images. Since Caffe has become popular as deep learning has taken off, we split CIFAR-10 this way and save it to LMDB, Caffe's default dataset storage format.
In addition, people often want to train AlexNet on CIFAR-10 for hashing, in which case the images need to be resized.
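As a rough, dependency-light sketch (NumPy only; the script itself uses Caffe and LMDB), this is how one raw CIFAR-10 batch row (3072 uint8 values stored channel-major: 1024 red, then green, then blue) becomes a channels-first image, and how a nearest-neighbour upscale to 256×256 for AlexNet could work. `row_to_image` and `resize_nearest` are illustrative names, not functions from this script:

```python
import numpy as np

def row_to_image(row):
    """Convert one CIFAR-10 row (3072 uint8 values, stored as
    1024 red + 1024 green + 1024 blue) into a (3, 32, 32) array,
    the channels-first layout Caffe's Datum expects."""
    return np.asarray(row, dtype=np.uint8).reshape(3, 32, 32)

def resize_nearest(img, size):
    """Nearest-neighbour resize of a (C, H, W) image to (C, size, size).
    A stand-in for cv2.resize so the sketch stays dependency-free."""
    c, h, w = img.shape
    rows = np.arange(size) * h // size   # source row for each output row
    cols = np.arange(size) * w // size   # source col for each output col
    return img[:, rows[:, None], cols]

# Fake "batch row": a gradient instead of a real CIFAR-10 image.
row = np.arange(3072, dtype=np.uint8)
img = row_to_image(row)          # shape (3, 32, 32)
big = resize_nearest(img, 256)   # shape (3, 256, 256), as for AlexNet
```

In the real script the resize target would come from the `--resize` flag, and a proper interpolating resize (e.g. OpenCV's) would likely be used instead of this nearest-neighbour stand-in.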
Usage:
python convert_cifar_for_hashing.py [--resize=256] path-of-cifar10-batches-py [the-path-of-lmdb]
Note: remember to add the path of 'caffe/python' to the PYTHONPATH environment variable.
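When writing records to LMDB, a common Caffe convention (an assumption about this script's internals, not something it documents) is to use fixed-width, zero-padded string keys, so that LMDB's lexicographic key ordering matches the numeric insertion order:

```python
def lmdb_key(index):
    """Zero-padded 8-digit key, e.g. b'00000042'.  A fixed width keeps
    lexicographic order identical to numeric order inside LMDB."""
    return b"%08d" % index

keys = [lmdb_key(i) for i in range(12000)]
# With padding, sorting the keys does not reorder them:
assert sorted(keys) == keys
```

In the actual conversion loop, each image would then be packed into a `caffe_pb2.Datum` and stored with something like `txn.put(lmdb_key(i), datum.SerializeToString())`.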
huqinghao/convert_cifar_for_hashing