Implementing Mixed-Depthwise-Convolutional-Kernels using PyTorch (22 Jul 2019)

  • Authors:
    • Mingxing Tan (Google Brain)
    • Quoc V. Le (Google Brain)
  • Paper: https://arxiv.org/abs/1907.09595

Method

(Figure: overview of the MixConv method from the paper.)

  • Mixing several kernel sizes within a single depthwise convolution improves both accuracy and efficiency.
  • Each kernel size has a different receptive field, so each channel group produces feature maps at a different scale (see the sketch below).
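
The method can be sketched as a small PyTorch module that splits the input channels into groups and runs a depthwise convolution with a different kernel size on each group. The following is a minimal sketch; the module name MixConv, the even channel split, and the (3, 5, 7) kernel sizes are illustrative assumptions, not this repository's exact code.

import torch
import torch.nn as nn

class MixConv(nn.Module):
    # Minimal MixConv sketch: split the channels into groups and apply a
    # depthwise convolution with a different kernel size to each group.
    def __init__(self, channels, kernel_sizes=(3, 5, 7)):
        super().__init__()
        # Split channels as evenly as possible across the kernel sizes
        # (an even split is an assumption; other partitions are possible).
        splits = [channels // len(kernel_sizes)] * len(kernel_sizes)
        splits[0] += channels - sum(splits)
        self.splits = splits
        self.convs = nn.ModuleList([
            # groups=c makes each convolution depthwise; padding=k // 2
            # preserves the spatial size for odd kernel sizes.
            nn.Conv2d(c, c, kernel_size=k, padding=k // 2, groups=c)
            for c, k in zip(splits, kernel_sizes)
        ])

    def forward(self, x):
        chunks = torch.split(x, self.splits, dim=1)
        return torch.cat([conv(c) for conv, c in zip(self.convs, chunks)], dim=1)

For a 24-channel input, for example, MixConv(24) runs 3x3, 5x5, and 7x7 depthwise convolutions on three 8-channel groups and concatenates the results back into 24 channels.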

Experiment

Dataset  | Model                       | Acc@1  | Acc@5  | Params (mine / paper)
CIFAR-10 | MixNet-s (work in progress) | 92.82% | 99.79% | 2.6M / -
CIFAR-10 | MixNet-m (work in progress) | 92.52% | 99.78% | 3.5M / -
CIFAR-10 | MixNet-l (work in progress) | 92.72% | 99.79% | 5.8M / -
ImageNet | MixNet-s (work in progress) | -      | -      | 4.1M / 4.1M
ImageNet | MixNet-m (work in progress) | -      | -      | 5.0M / 5.0M
ImageNet | MixNet-l (work in progress) | -      | -      | 7.3M / 7.3M

Usage

python main.py
  • --data (str): the ImageNet dataset path

  • --dataset (str): dataset name, (example: CIFAR10, CIFAR100, MNIST, IMAGENET)

  • --batch-size (int)

  • --num-workers (int)

  • --epochs (int)

  • --lr (float): learning rate

  • --momentum (float): momentum

  • --weight-decay (float): weight decay

  • --print-interval (int): training log print cycle

  • --cuda (bool)

  • --pretrained-model (bool): whether to use a pretrained model
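
For example, a CIFAR-10 training run could look like the following (the flag values are illustrative, not the repository's defaults):

python main.py --dataset CIFAR10 --batch-size 128 --epochs 200 --lr 0.1 --momentum 0.9 --weight-decay 1e-4 --cuda True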

Todo

  • Distributed SGD
  • ImageNet experiment

Reference

  • Mingxing Tan and Quoc V. Le. MixConv: Mixed Depthwise Convolutional Kernels. BMVC 2019. arXiv:1907.09595.
