Winner Solution for 4th LPCVC

Authors

The 1st place winner of the 4th On-device Visual Intelligence Competition (OVIC) of the Low-Power Computer Vision Challenge (LPCVC), in both the classification track and the detection track. The challenge rewards the best accuracy under a latency constraint when deploying neural networks on mobile phones.

  • Tianzhe Wang
  • Han Cai
  • Shuai Zheng
  • Jia Li
  • Song Han

Description

This repository contains the models submitted to OVIC and the implementation code for training and export.

  • OVIC track: Image Classification, Object Detection

Software

We use a Google Pixel 2 to measure the real latency of our exported TFLite models.
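On-device latency is reported by running the exported TFLite model on the phone. As a host-side illustration of the measurement methodology (warm-up runs followed by a median over many timed runs), here is a minimal sketch; the callable is a stand-in, not the actual on-device benchmark:

```python
import statistics
import time

def measure_latency_ms(run_once, warmup=5, runs=50):
    """Time a single-inference callable; return the median latency in ms.

    `run_once` stands in for one forward pass of an exported model
    (e.g. a TFLite interpreter invoke on device).
    """
    for _ in range(warmup):  # warm caches before timing
        run_once()
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        run_once()
        samples.append((time.perf_counter() - start) * 1000.0)
    return statistics.median(samples)

# Example with a stand-in workload instead of a real model:
latency = measure_latency_ms(lambda: sum(i * i for i in range(10_000)))
print(f"median latency: {latency:.3f} ms")
```

The median (rather than the mean) is used so that occasional scheduler hiccups do not skew the reported number.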

Model for Classification

| Model | Download | MD5 checksum |
| --- | --- | --- |
| 33ms_top1@0.80585 | Download Link | 0091c33f6756b0494d967599695a1c3f |
| 35ms_top1@0.7329 | Download Link | 3107acf731434762d87621d824165333 |
| 36ms_top1@0.73405 | Download Link | 833e3b56f034427b2a929cc44933a447 |

Model for Detection

| Model | Download | MD5 checksum |
| --- | --- | --- |
| mmlab-distill_23.6 | Download Link | d7945dc1dc52c9372db769facbda1f99 |

We provide TFLite models for evaluation here. Users can use the scripts in the corresponding folders to generate the checkpoint, frozen graph, and TFLite model.
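After downloading, the MD5 checksums listed in the tables above can be verified with Python's standard library; a small sketch (the file path is illustrative):

```python
import hashlib

def md5_of(path, chunk_size=1 << 20):
    """Compute the MD5 hex digest of a file, streaming in 1 MiB chunks."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Usage (digest taken from the classification table; filename is hypothetical):
# assert md5_of("33ms_top1.tflite") == "0091c33f6756b0494d967599695a1c3f"
```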

Algorithm: once-for-all networks

We address the challenging problem of efficient inference across many devices and resource constraints, especially on edge devices. We propose a Once-for-All network (OFA, ICLR'2020) that supports diverse architectural settings by decoupling model training from architecture search. A specialized sub-network can be quickly obtained by selecting from the OFA network, without additional training. We also propose a novel progressive shrinking algorithm, a generalized pruning method that shrinks the model across four dimensions (depth, width, kernel size, and resolution) rather than width alone, yielding a surprisingly large number of sub-networks (> 10^19) that fit different latency constraints. On edge devices, OFA consistently outperforms SOTA NAS methods (up to 4.0% ImageNet top1 accuracy improvement over MobileNetV3, or the same accuracy but 1.5x faster than MobileNetV3 and 2.6x faster than EfficientNet w.r.t. measured latency) while reducing GPU hours and CO2 emission by many orders of magnitude. In particular, OFA achieves a new SOTA 80.0% ImageNet top1 accuracy under 600M MACs. OFA is the winning solution of the 4th Low-Power Computer Vision Challenge, in both the classification track and the detection track. Code and 50 pre-trained models for CPU/GPU/DSP/mobile CPU/mobile GPU (for different devices and different latency constraints) are released at https://github.com/mit-han-lab/once-for-all.
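The "> 10^19 sub-networks" figure can be made concrete with a back-of-the-envelope count. Assuming an OFA-style elastic design space of 5 units, with per-unit depth chosen from {2, 3, 4} and, for each layer, kernel size from {3, 5, 7} and width expansion ratio from {3, 4, 6} (hyper-parameters as described in the OFA paper):

```python
# Count sub-networks in an OFA-style elastic design space.
# Assumed design-space hyper-parameters (from the OFA paper): 5 units,
# elastic depth in {2,3,4}, kernel size in {3,5,7}, width expand in {3,4,6}.
DEPTHS = (2, 3, 4)
KERNELS = (3, 5, 7)
EXPANDS = (3, 4, 6)
UNITS = 5

choices_per_layer = len(KERNELS) * len(EXPANDS)         # 9 (kernel, width) combos
per_unit = sum(choices_per_layer ** d for d in DEPTHS)  # 81 + 729 + 6561 = 7371
total = per_unit ** UNITS                               # about 2e19 sub-networks

print(f"{total:.3e} sub-networks")
```

Every one of these sub-networks shares weights with the single trained OFA supernet, which is why a specialized model can be extracted without retraining.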

80% top1 ImageNet accuracy with <600M MACs

OFA achieves 80.0% top1 accuracy with 595M MACs and 80.1% top1 accuracy with 143ms Pixel 1 latency, setting a new SOTA ImageNet top1 accuracy in the mobile setting.

Consistently outperforms MobileNetV3/MobileNetV2 on diverse hardware platforms

References

  • Once for All: Train One Network and Specialize it for Efficient Deployment (ICLR'2020)

@inproceedings{
  cai2020once,
  title={Once for All: Train One Network and Specialize it for Efficient Deployment},
  author={Han Cai and Chuang Gan and Tianzhe Wang and Zhekai Zhang and Song Han},
  booktitle={International Conference on Learning Representations},
  year={2020},
  url={https://arxiv.org/pdf/1908.09791.pdf}
}
  • APQ: Joint Search for Network Architecture, Pruning and Quantization Policy (CVPR'2020) [GitHub] [arXiv] [Video]
@inproceedings{
  wang2020apq,
  title={APQ: Joint Search for Network Architecture, Pruning and Quantization Policy},  
  author={Wang, Tianzhe and Wang, Kuan and Cai, Han and Lin, Ji and Liu, Zhijian and Wang, Hanrui and Lin, Yujun and Han, Song},
  booktitle={Proceedings of the IEEE conference on computer vision and pattern recognition},
  year={2020}
}

Training and Evaluation

See the corresponding folder for details.

License

Apache License 2.0

About

[LPIRC 2019, ICCV 2019] Winner Solution for 4th LPCVC
