
Chainer implementation of "Perceptual Losses for Real-Time Style Transfer and Super-Resolution"

Fast artistic style transfer using a feed-forward network.

  • input image size: 512x384
  • process time (CPU): 1.954 s (Core i7-5930K)
  • process time (GPU): 0.398 s (TitanX)

Requirements

$ pip install chainer

Prerequisite

Download the VGG16 model and convert it into a smaller file containing only the convolutional layers, which make up about 10% of the entire model.

sh setup_model.sh
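Conceptually, the conversion keeps the convolutional weights and discards the large fully connected layers. A minimal sketch of that filtering idea (the dict-based parameter format and layer names here are hypothetical illustrations, not the script's actual format):

```python
import numpy as np

def keep_conv_layers(params):
    """Keep only parameters whose names mark convolutional layers.

    `params` is a hypothetical dict mapping layer names to weight arrays;
    the real setup_model.sh operates on the downloaded VGG16 model instead.
    """
    return {name: w for name, w in params.items() if name.startswith("conv")}

# Toy example: the fully connected layers (fc*) dominate the parameter count.
params = {
    "conv1_1/W": np.zeros((64, 3, 3, 3), dtype=np.float32),
    "fc6/W": np.zeros((4096, 25088), dtype=np.float32),
}
small = keep_conv_layers(params)
```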

Train

You need to train one image transformation network per style target. Following the paper, the models are trained on the Microsoft COCO dataset.

python train.py -s <style_image_path> -d <training_dataset_path> -g 0
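The style term of the perceptual loss in the paper compares Gram matrices of feature maps extracted by the loss network. A minimal NumPy sketch of that computation (an illustration of the technique, not this repo's actual train.py code):

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a (C, H, W) feature map, normalized by C*H*W."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (c * h * w)

def style_loss(feat_generated, feat_style):
    """Squared Frobenius distance between the two Gram matrices."""
    diff = gram_matrix(feat_generated) - gram_matrix(feat_style)
    return float(np.sum(diff ** 2))
```

During training this loss would be summed over several VGG layers and combined with a feature (content) reconstruction loss.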

Generate

python generate.py <input_image_path> -m <model_path> -o <output_image_path>

This repo includes a pretrained model trained on "The Starry Night" by Vincent van Gogh as an example.

  • example:
python generate.py sample_images/tubingen.jpg -m models/starrynight.model -o sample_images/output.jpg
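After the network produces a float-valued output, a typical final step clips the values to the displayable range and converts them to 8-bit pixels. A hedged sketch of that post-processing (generate.py's exact steps may differ):

```python
import numpy as np

def to_image(output):
    """Convert a float (C, H, W) network output to an (H, W, C) uint8 image."""
    clipped = np.clip(output, 0, 255).astype(np.uint8)
    return clipped.transpose(1, 2, 0)
```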

Difference from paper

  • Convolution kernel size 4 instead of 3.
  • Not sure whether adding/subtracting the mean image is needed. In this implementation, the mean image is subtracted before the input image is fed into the image transformation network.
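The mean-image handling described above can be sketched as follows. The mean pixel values here are the standard VGG BGR means; whether this matches the repo's exact preprocessing is an assumption:

```python
import numpy as np

# Standard VGG mean pixel (BGR order); an assumption, not taken from this repo.
MEAN = np.array([103.939, 116.779, 123.68], dtype=np.float32).reshape(3, 1, 1)

def preprocess(image):
    """Subtract the mean image before feeding the transformation network."""
    return image.astype(np.float32) - MEAN

def postprocess(output):
    """Add the mean back, if the symmetric variant is used."""
    return output + MEAN
```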

License

MIT

Reference

Code in this repository is based on the following nice works; thanks to the authors.

  • chainer-gogh: Chainer implementation of neural-style, which I heavily referenced.
  • chainer-cifar10: referenced for the residual block implementation.
