
yolov3-plus_PyTorch

A better PyTorch version of YOLOv3.

I call it "YOLOv3-plus"~

It is not the final version. I'm still trying to make it better and better.

A strong YOLOv3 in PyTorch

In this project, you can enjoy three basic detectors:

  • yolo-v3-spp
  • yolo-v3-plus
  • yolo-v3-slim

What's more, I also provide stronger detectors (still training...) built on several kinds of CSPDarknet:

  • yolo-v3-plus-large (with CSPDarknet-large)
  • yolo-v3-plus-medium (with CSPDarknet-medium)
  • yolo-v3-plus-small (with CSPDarknet-small)
  • yolo-v3-slim-csp (with CSPDarknet-slim / CSPDarknet-tiny )

Of course, the CSPDarknets used in these new models were all trained by myself on ImageNet. My CSPDarknet is a little different from the ones used in YOLOv4 and YOLOv5; I referred to YOLOv4, YOLOv5 and Scaled-YOLOv4. For more details, you can read my backbone file backbone/cspdarknet.py.
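The exact backbone is in backbone/cspdarknet.py; as a rough illustration of the CSP idea only (class names, widths and layer counts below are my own, not the repo's), a CSP block splits the channels into two branches, runs a stack of convolutions on one branch, and concatenates the two before fusing:

```python
import torch
import torch.nn as nn

class Conv(nn.Module):
    """Conv + BatchNorm + LeakyReLU, the usual basic unit in Darknet-style nets."""
    def __init__(self, c_in, c_out, k=1, s=1):
        super().__init__()
        self.layer = nn.Sequential(
            nn.Conv2d(c_in, c_out, k, s, k // 2, bias=False),
            nn.BatchNorm2d(c_out),
            nn.LeakyReLU(0.1, inplace=True),
        )

    def forward(self, x):
        return self.layer(x)

class CSPBlock(nn.Module):
    """Cross Stage Partial block: split channels into two branches,
    run a stack of bottleneck convs on one branch only, then
    concatenate both branches and fuse with a 1x1 conv."""
    def __init__(self, c, n=1):
        super().__init__()
        c_half = c // 2
        self.split1 = Conv(c, c_half)   # branch processed by the conv stack
        self.split2 = Conv(c, c_half)   # shortcut branch, passed through cheaply
        self.blocks = nn.Sequential(*[
            nn.Sequential(Conv(c_half, c_half, k=1), Conv(c_half, c_half, k=3))
            for _ in range(n)
        ])
        self.fuse = Conv(c, c)          # merge the concatenated branches

    def forward(self, x):
        y1 = self.blocks(self.split1(x))
        y2 = self.split2(x)
        return self.fuse(torch.cat([y1, y2], dim=1))
```

The point of the split is that only half the channels go through the deep stack, which cuts computation while keeping gradient paths diverse.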

YOLOv3-SPP

I try to reproduce YOLOv3 with an SPP module.
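For reference, the SPP module is small enough to sketch in full. This is a generic SPP as described in the YOLOv3-SPP literature (the kernel sizes 5/9/13 are the conventional choice, assumed here rather than read from this repo's code):

```python
import torch
import torch.nn as nn

class SPP(nn.Module):
    """Spatial Pyramid Pooling: max-pool the same feature map at several
    kernel sizes (stride 1, 'same' padding) and concatenate the results
    along the channel dimension, so output channels = in_channels * 4."""
    def __init__(self, kernel_sizes=(5, 9, 13)):
        super().__init__()
        self.pools = nn.ModuleList(
            nn.MaxPool2d(k, stride=1, padding=k // 2) for k in kernel_sizes
        )

    def forward(self, x):
        # keep the original map plus one pooled copy per kernel size
        return torch.cat([x] + [p(x) for p in self.pools], dim=1)
```

Because stride is 1 and padding is k//2, the spatial size is unchanged; only the channel count grows.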

VOC:

| data | size | Ours |
|------|------|------|
| VOC07 test | 416 | 81.6 |
| VOC07 test | 608 | 82.5 |

COCO:

| model | data | AP | AP50 | AP75 | AP_S | AP_M | AP_L |
|-------|------|----|------|------|------|------|------|
| YOLOv3-SPP-320 | COCO test-dev | 31.7 | 52.6 | 32.9 | 10.9 | 33.2 | 48.6 |
| YOLOv3-SPP-416 | COCO test-dev | 34.6 | 56.1 | 36.3 | 14.7 | 36.2 | 50.1 |
| YOLOv3-SPP-608 | COCO test-dev | 37.1 | 58.9 | 39.3 | 19.6 | 39.5 | 48.5 |

So, just have fun!

YOLOv3-Plus

I add a PAN module to the above YOLOv3-SPP and get a better detector:
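The PAN (Path Aggregation) idea is a bottom-up pass added after the usual top-down FPN pass: the finer pyramid level is downsampled and fused into the coarser one. A minimal sketch, with illustrative channel counts that are my assumption rather than the repo's actual head:

```python
import torch
import torch.nn as nn

class PANFuse(nn.Module):
    """One bottom-up fusion step of a PAN head: downsample the
    high-resolution level with a stride-2 conv, concatenate it with the
    next-coarser level, and merge with a 1x1 conv."""
    def __init__(self, c=128):
        super().__init__()
        self.down = nn.Conv2d(c, c, 3, stride=2, padding=1)  # bottom-up downsample
        self.fuse = nn.Conv2d(2 * c, c, 1)                   # merge after concat

    def forward(self, p_fine, p_coarse):
        # p_fine: high-resolution level; p_coarse: half its spatial size
        y = self.down(p_fine)
        return self.fuse(torch.cat([y, p_coarse], dim=1))
```

A full PAN head applies this step at each adjacent pair of pyramid levels, after the top-down FPN pass has already run.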

On COCO eval:

| model | data | AP | AP50 |
|-------|------|----|------|
| YOLOv3-Plus-416 | COCO eval | 37.40 | 57.42 |
| YOLOv3-Plus-608 | COCO eval | 40.02 | 60.45 |

YOLOv3-Slim

I also provide a lightweight detector: YOLOv3-Slim.

It is very simple. The backbone, darknet_tiny, consists of only 10 conv layers. The neck is the same SPP as the one used in my YOLOv3-Plus, and the head is FPN+PAN with fewer conv layers and fewer conv kernels.
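To give a feel for how small a 10-conv backbone is, here is an illustrative sketch. The widths and layer arrangement below are my own guesses for demonstration, not the repo's actual darknet_tiny:

```python
import torch
import torch.nn as nn

def conv_bn(c_in, c_out, k=3, s=1):
    """3x3 conv + BatchNorm + LeakyReLU with 'same' padding."""
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, k, s, k // 2, bias=False),
        nn.BatchNorm2d(c_out),
        nn.LeakyReLU(0.1, inplace=True),
    )

class TinyBackbone(nn.Module):
    """Hypothetical 10-conv tiny backbone: five stages, each a 3x3 conv
    followed by a stride-2 downsampling conv, giving a 32x-downsampled
    feature map (416 -> 13)."""
    def __init__(self):
        super().__init__()
        widths = [16, 32, 64, 128, 256]
        layers, c_in = [], 3
        for c_out in widths:
            layers.append(conv_bn(c_in, c_out))        # feature conv
            layers.append(conv_bn(c_out, c_out, s=2))  # stride-2 downsample
            c_in = c_out
        self.features = nn.Sequential(*layers)  # 10 conv layers total

    def forward(self, x):
        return self.features(x)
```

With only ten convs the backbone is cheap enough for lightweight deployment, which is the point of the Slim variant.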

COCO eval:

| model | data | AP | AP50 |
|-------|------|----|------|
| YOLOv3-Slim-416 | COCO eval | 26.08 | 45.65 |
| YOLOv3-Slim-608 | COCO eval | 26.85 | 47.58 |

Better YOLOv3-Plus and YOLOv3-Slim detectors

On COCO (please hold on ...)

| model | data | size | AP | AP50 | AP75 | AP_S | AP_M | AP_L |
|-------|------|------|----|------|------|------|------|------|
| YOLOv3-Plus-large | COCO test-dev | 320 | | | | | | |
| YOLOv3-Plus-large | COCO test-dev | 416 | | | | | | |
| YOLOv3-Plus-large | COCO test-dev | 608 | | | | | | |
| YOLOv3-Plus-medium | COCO test-dev | 320 | | | | | | |
| YOLOv3-Plus-medium | COCO test-dev | 416 | | | | | | |
| YOLOv3-Plus-medium | COCO test-dev | 608 | | | | | | |
| YOLOv3-Plus-small | COCO test-dev | 320 | | | | | | |
| YOLOv3-Plus-small | COCO test-dev | 416 | | | | | | |
| YOLOv3-Plus-small | COCO test-dev | 608 | | | | | | |
| YOLOv3-Slim-csp | COCO test-dev | 320 | | | | | | |
| YOLOv3-Slim-csp | COCO test-dev | 416 | | | | | | |
| YOLOv3-Slim-csp | COCO test-dev | 608 | | | | | | |

Installation

  • PyTorch (GPU) 1.1.0/1.2.0/1.3.0
  • Tensorboard 1.14
  • opencv-python, Python 3.6/3.7

Dataset

For now, I only train and test on PASCAL VOC2007 and VOC2012.

VOC Dataset

The download scripts are copied from the following excellent project: https://github.com/amdegroot/ssd.pytorch

I have uploaded VOC2007 and VOC2012 to BaiduYunDisk, so researchers in China can download them from there:

Link:https://pan.baidu.com/s/1tYPGCYGyC0wjpC97H-zzMQ

Password:4la9

You will get a VOCdevkit.zip; just unzip it and put it into data/. After that, the full paths to the VOC datasets are data/VOCdevkit/VOC2007 and data/VOCdevkit/VOC2012.
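A quick way to confirm the layout before training (the root directory "data" below is the default assumed by this README; adjust it if you unzip elsewhere):

```python
import os

def check_voc_layout(root="data"):
    """Return the expected VOC directories under `root` that are missing.
    The layout below is what the README describes after unzipping
    VOCdevkit.zip into data/."""
    expected = [
        os.path.join(root, "VOCdevkit", "VOC2007"),
        os.path.join(root, "VOCdevkit", "VOC2012"),
    ]
    return [p for p in expected if not os.path.isdir(p)]

if __name__ == "__main__":
    missing = check_voc_layout()
    print("all VOC directories found" if not missing else f"missing: {missing}")
```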

Download VOC2007 trainval & test

# specify a directory for dataset to be downloaded into, else default is ~/data/
sh data/scripts/VOC2007.sh # <directory>

Download VOC2012 trainval

# specify a directory for dataset to be downloaded into, else default is ~/data/
sh data/scripts/VOC2012.sh # <directory>

MSCOCO Dataset

The download scripts are copied from the following excellent project: https://github.com/DeNA/PyTorch_YOLOv3

Download MSCOCO 2017 dataset

Just run sh data/scripts/COCO2017.sh. You will get COCO train2017, val2017 and test2017.

Train

VOC

python train_voc.py -v [select a model] -hr -ms --cuda

You can run python train_voc.py -h to check all optional arguments.

COCO

python train_coco.py -v [select a model] -hr -ms --cuda

Test

VOC

python test_voc.py -v [select a model] --trained_model [path to model weights] --cuda

COCO

python test_coco.py -v [select a model] --trained_model [path to model weights] --cuda

Evaluation

VOC

python eval_voc.py -v [select a model] --train_model [path to model weights] --cuda

COCO

To run on COCO_val:

python eval_coco.py -v [select a model] --train_model [path to model weights] --cuda

To run on COCO test-dev (make sure you have downloaded test2017):

python eval_coco.py -v [select a model] --train_model [path to model weights] --cuda -t

You will get a .json file which can be evaluated on the COCO test server.
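For reference, the COCO test server expects a JSON array with one entry per detection, in the standard COCO results format. The values below are made up for illustration; only the keys and the [x, y, width, height] bbox convention are part of the format:

```python
import json

# One detection in the standard COCO results format.
detections = [
    {
        "image_id": 42,                       # image id from the test2017 set
        "category_id": 1,                     # COCO category id (1 = person)
        "bbox": [100.0, 50.0, 80.0, 160.0],   # [x, y, width, height] in pixels
        "score": 0.87,                        # detection confidence
    },
]

# Write the file you would upload to the COCO evaluation server.
with open("detections_test-dev2017_results.json", "w") as f:
    json.dump(detections, f)
```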

You can run python eval_coco.py -h to check all optional arguments.
