# NeuralStyleTransfer

## Description

A Chainer implementation of *A Neural Algorithm of Artistic Style*. In short, this is an algorithm that transfers the artistic style of one image onto another with the help of a convolutional neural network.
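The style representation at the heart of the algorithm is the Gram matrix of a layer's feature maps: it records which feature channels co-occur while discarding spatial layout. A minimal NumPy sketch (function names are illustrative, not taken from this repository):

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a feature map with shape (channels, height, width).

    Entry (i, j) is the inner product between the flattened activations of
    channels i and j, normalized by the number of spatial positions.
    """
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (h * w)

def style_loss(gram_generated, gram_style):
    """Mean squared difference between two Gram matrices."""
    return float(np.mean((gram_generated - gram_style) ** 2))
```

The total objective then weights a content loss (feature differences against the content image) against this style loss, which is where the `alpha` and `beta` parameters below come in.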

## Detail

## Requirement

To run this code you need Chainer:

```
pip install chainer
```

Versions tested are v1.14.0 and v1.18.0; it should work with any version in between.

A VGG-19 Caffe model is also required for this implementation to work. You can use the normalized version used by the authors of the article, vgg_normalised.caffemodel, or the original one from the VGG team, VGG_ILSVRC_19_layers.caffemodel.

With minor modifications, other CNNs (e.g. NIN, GoogLeNet) can be used as well. See jcjohnson's explanation and mattya's implementation.

## Usage

A helper function `generate_image()` does the transfer; simply call it in `main()` to generate images. An example is given below:

```python
def main():
    # set up global flag to run this program on gpu
    use_gpu(True)

    # instantiate a VGG19 model object
    cnn = VGG19()

    # begin style transfer
    generate_image(cnn, 'content.jpg', 'style.jpg', alpha=150.0, beta=12000.0,
                   init_image='noise', optimizer='rmsprop', iteration=1600, lr=0.25, prefix='temp')
```

## Parameters

The parameters `generate_image()` uses are:

- `cnn`: A CNN model object. Currently only `VGG19` is implemented.
- `content`, `style`: Strings. Filenames of the content and style images to use.
- `alpha`, `beta`: Floats. Weighting factors for content and style reconstruction.
- `color`: String. Color-preserving scheme to use; choose between `none` (no color preserving), `histogram` (histogram matching), and `luminance` (luminance-only transfer).
- `a`: Boolean. Whether to match the luminance channel of the style image to that of the content image before transferring; only applies when `color` is `luminance`.
- `init_image`: String. Choose between `noise`, `content`, and `style`.
- `optimizer`: String. Optimizer to use; choose between `adam` (Adam) and `rmsprop` (Alex Graves's RMSprop).
- `iteration`: Int. Number of iterations to run.
- `lr`: Float. Learning rate of the optimizer.
- `save`: Int. The optimizer writes an output to file every `save` iterations.
- `filename`: String. Prefix of the filename when saving output. Saved files are named in the format `prefix_iterations`.
- `contrast`: Boolean. Whether to boost the contrast when saving the output; defaults to `True`. The output sometimes has less saturation than expected, so a few lines give the contrast a kick when saving to file.
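The `rmsprop` option refers to the variant from Graves's 2013 sequence-generation paper, which Chainer ships as `chainer.optimizers.RMSpropGraves`. As a rough standalone sketch of that update rule (hyperparameter values here are illustrative, not necessarily the repository's or Chainer's defaults):

```python
import numpy as np

def rmsprop_graves_step(param, grad, state, lr=0.25, alpha=0.95,
                        momentum=0.9, eps=1e-4):
    """One update of Alex Graves's RMSprop variant.

    Keeps running averages of the gradient and the squared gradient, and
    scales the step by an estimate of the gradient's standard deviation.
    ``state`` holds the averages 'n', 'g' and the momentum buffer 'd'.
    """
    state['n'] = alpha * state['n'] + (1 - alpha) * grad ** 2
    state['g'] = alpha * state['g'] + (1 - alpha) * grad
    state['d'] = momentum * state['d'] - lr * grad / np.sqrt(
        state['n'] - state['g'] ** 2 + eps)
    return param + state['d']
```

In the actual code the optimizer updates the pixels of the generated image, with `lr` set by the parameter above.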

## Result

Here we demonstrate the effect of the transformation using a photo (original file) of Grainger Engineering Library at UIUC. Titles of the artworks used are also given:

(Image grid: the original photo followed by stylized results. Style images used:)

- *The Shipwreck of the Minotaur*
- *De Sterrennacht* (The Starry Night)
- *Der Schrei* (The Scream)
- *Femme Nue Assise* (Seated Nude Woman)
- *Композиция 7* (Composition VII)
- *夢入靑城天下幽人間仙境圖* (Dreamland of Mountain Chingcherng, a Heavenly Place on Earth)
- *神奈川沖浪裏* (The Great Wave off Kanagawa)

### Color Preserving Transfer

under construction ᕕ( ᐛ )ᕗ

(Image grid, using *Starry Night Over the Rhône* as the style:)

- Transfer without color preserving
- Transfer with histogram matching
- Luminance-only transfer, histogram matched
- Luminance-only transfer
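The luminance matching toggled by the `a` parameter can be approximated by shifting and scaling the style image's luminance channel so its mean and standard deviation match the content's, as described in the color-preservation paper. A simplified NumPy sketch (the repository's exact matching may differ):

```python
import numpy as np

def match_luminance(style_lum, content_lum):
    """Match the style luminance channel's mean and standard deviation
    to the content's. Inputs are 2-D float arrays (luminance channels)."""
    s_mean, s_std = style_lum.mean(), style_lum.std()
    c_mean, c_std = content_lum.mean(), content_lum.std()
    return (style_lum - s_mean) * (c_std / (s_std + 1e-8)) + c_mean
```

For luminance-only transfer, the style transfer then runs on the luminance channel alone, and the content image's original color channels are recombined with the result.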

## Reference

Leon A. Gatys, Alexander S. Ecker & Matthias Bethge (2015). A Neural Algorithm of Artistic Style. In: CoRR.

Leon A. Gatys, Matthias Bethge, Aaron Hertzmann & Eli Shechtman (2016). Preserving Color in Neural Artistic Style Transfer. In: CoRR.

Leon A. Gatys, Alexander S. Ecker, Matthias Bethge, Aaron Hertzmann & Eli Shechtman (2016). Controlling Perceptual Factors in Neural Style Transfer. In: CoRR.

Gatys, Leon A., Ecker, Alexander S. & Bethge, Matthias (2016). Image Style Transfer Using Convolutional Neural Networks. In: The IEEE Conference on Computer Vision and Pattern Recognition, pp. 2414-2423.

F. Pitie & A. Kokaram (2007). The linear Monge-Kantorovitch linear colour mapping for example-based colour transfer. In: IETCVMP. 4th European Conference on Visual Media Production, 2007, pp. 1-9.

## Acknowledgement

In making this program, I referred to the helpful works of jcjohnson, apple2373, mattya, and andersbll.

## Author

Francis Hsu, University of Illinois at Urbana–Champaign.
