src/caffe/layers/curious_layer.cpp and src/caffe/layers/curious_layer.cu implement the inference scheme from *Quantized Convolutional Neural Networks for Mobile Devices*. The quantization itself must be performed by a separate Python program; reducing the whole-model error after quantization is not included.
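The Python-side quantization step could be sketched roughly as follows. This is only an illustration of the product-quantization idea from the Q-CNN paper (split each weight vector into sub-vectors and k-means each subspace), not this fork's actual API; all function and parameter names here are invented for the example.

```python
import numpy as np

def quantize_weights(W, num_subspaces=4, num_centroids=16, iters=20, seed=0):
    """Product-quantize a 2-D weight matrix: split each row into
    sub-vectors and run plain Lloyd's k-means in each subspace.
    Illustrative sketch only; not the fork's real interface."""
    rng = np.random.default_rng(seed)
    rows, cols = W.shape
    assert cols % num_subspaces == 0
    d = cols // num_subspaces
    codebooks, codes = [], []
    for s in range(num_subspaces):
        sub = W[:, s * d:(s + 1) * d]                       # (rows, d) slice
        cent = sub[rng.choice(rows, num_centroids, replace=False)]
        for _ in range(iters):                              # Lloyd's iterations
            dist = ((sub[:, None, :] - cent[None]) ** 2).sum(-1)
            assign = dist.argmin(1)                         # nearest centroid
            for k in range(num_centroids):
                pts = sub[assign == k]
                if len(pts):
                    cent[k] = pts.mean(0)                   # recompute centroid
        codebooks.append(cent)
        codes.append(assign)
    return codebooks, codes

def dequantize(codebooks, codes):
    """Rebuild the approximate weight matrix from codebooks and codes."""
    return np.concatenate([cb[c] for cb, c in zip(codebooks, codes)], axis=1)
```

At inference time the layer would look up precomputed inner products against the codebook entries instead of multiplying full weight vectors, which is where the speedup in the paper comes from.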
curious_layer in branch gbd prunes the parameters according to *Fast ConvNets Using Group-wise Brain Damage*. The pruning itself must be performed by a separate Python program. The layer supports backward gradient propagation. Because the pruning pattern is irregular, the time complexity of the im2col and col2im steps on GPU is not reduced compared to the standard Convolution layer.
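The group-wise pruning done on the Python side could be sketched like this. The sketch assumes the grouping from the paper, where each (input channel, kernel position) column spans all output channels and the weakest groups by L2 norm are zeroed; the function and parameter names are illustrative, not the fork's actual interface.

```python
import numpy as np

def groupwise_prune(W, prune_ratio=0.5):
    """Sketch of group-wise brain damage for a conv weight tensor of shape
    (out_channels, in_channels, kh, kw): each (in_channel, kh, kw) position
    forms a group spanning all output channels; groups are ranked by L2
    norm and the weakest prune_ratio fraction is zeroed. Illustrative only."""
    out_c, in_c, kh, kw = W.shape
    groups = W.reshape(out_c, -1)                  # each column is one group
    norms = np.linalg.norm(groups, axis=0)         # one L2 norm per group
    k = int(norms.size * prune_ratio)              # number of groups to drop
    keep = np.ones(norms.size, dtype=bool)
    keep[np.argsort(norms)[:k]] = False            # drop the weakest groups
    pruned = groups * keep                         # zero whole columns
    return pruned.reshape(W.shape), keep.reshape(in_c, kh, kw)
```

Zeroing whole columns of the reshaped weight matrix is what lets the corresponding columns be skipped in the im2col-based GEMM; the im2col/col2im steps themselves still touch every input element, which is why their cost is unchanged here.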
Caffe is a deep learning framework made with expression, speed, and modularity in mind. It is developed by the Berkeley Vision and Learning Center (BVLC) and community contributors.
Check out the project site for all the details like:

- DIY Deep Learning for Vision with Caffe
- Tutorial Documentation
- BVLC reference models and the community model zoo
- Installation instructions and step-by-step examples
Please join the caffe-users group or gitter chat to ask questions and talk about methods and models. Framework development discussions and thorough bug reports are collected on Issues.
Happy brewing!
Caffe is released under the BSD 2-Clause license. The BVLC reference models are released for unrestricted use.
Please cite Caffe in your publications if it helps your research:
@article{jia2014caffe,
  Author = {Jia, Yangqing and Shelhamer, Evan and Donahue, Jeff and Karayev, Sergey and Long, Jonathan and Girshick, Ross and Guadarrama, Sergio and Darrell, Trevor},
  Journal = {arXiv preprint arXiv:1408.5093},
  Title = {Caffe: Convolutional Architecture for Fast Feature Embedding},
  Year = {2014}
}