# NeuralStyleTransfer
A Chainer implementation of *A Neural Algorithm of Artistic Style*. In short, this is an algorithm that transfers the artistic style of one image onto another with the help of a convolutional neural network.
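Under the hood, the style representation in this algorithm comes from Gram matrices of CNN feature maps, and the style loss penalizes differences between the Gram matrices of the generated and style images. Below is a minimal NumPy sketch of that idea (illustrative only; function names and normalization are assumptions, not the repository's Chainer code):

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a (channels, height, width) feature map:
    correlations between channel activations, averaged over positions."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (h * w)

def style_loss(gen_features, style_features):
    """Squared difference between Gram matrices for one layer
    (normalization constant is an illustrative choice)."""
    g_gen = gram_matrix(gen_features)
    g_style = gram_matrix(style_features)
    c = gen_features.shape[0]
    return np.sum((g_gen - g_style) ** 2) / (4.0 * c ** 2)
```

In the full algorithm this loss (summed over several VGG layers and weighted by `beta`) is combined with a content loss (weighted by `alpha`) and minimized with respect to the pixels of the generated image.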
To run this code you need Chainer:

```
pip install chainer
```

The tested versions are v1.14.0 and v1.18.0; the code should work with any version in between.
A VGG-19 Caffe model is also required for this implementation to work. You can use the normalized version used by the authors of the article, vgg_normalised.caffemodel, or the original one from the VGG team, VGG_ILSVRC_19_layers.caffemodel.
With minor modifications, other CNNs (e.g. NIN, GoogLeNet) can be used as well. See jcjohnson's explanation and mattya's implementation.
A helper function `generate_image()` was created to do the transfer. Simply call it in `main()` to generate images. An example is given below:
```python
def main():
    # set up global flag to run this program on GPU
    use_gpu(True)
    # instantiate a VGG19 model object
    cnn = VGG19()
    # begin style transfer
    generate_image(cnn, 'content.jpg', 'style.jpg', alpha=150.0, beta=12000.0,
                   init_image='noise', optimizer='rmsprop', iteration=1600,
                   lr=0.25, prefix='temp')
```
The parameters `generate_image()` accepts are:

- `cnn`: A CNN model object. Currently only VGG19 is implemented.
- `content`, `style`: Strings. Filenames of the content and style images to use.
- `alpha`, `beta`: Floats. Weighting factors for content and style reconstruction.
- `color`: String. Color-preserving scheme to use; choose between `none` (no color preserving), `histogram` (histogram matching), and `luminance` (luminance-only transfer).
- `a`: Boolean. Whether to match the luminance channel of the style image to that of the content image before transferring; only takes effect if `color` is `luminance`, of course.
- `init_image`: String. Choose between `noise`, `content`, and `style`.
- `optimizer`: String. Optimizer to use; choose between `adam` (ADAM) and `rmsprop` (Alex Graves's RMSprop).
- `iteration`: Int. Number of iterations to run.
- `lr`: Float. Learning rate of the optimizer.
- `save`: Int. The optimizer writes an output to file after every `save` iterations.
- `filename`: String. Prefix of the filename used when saving output. Saved files are named in the format `prefix_iterations`.
- `contrast`: Boolean. Whether to boost the contrast when saving the output. Defaults to `True`. Sometimes the output has less saturation than expected, so a few lines were inserted to give the contrast a kick when saving to file.
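To give a rough idea of what luminance matching does before a luminance-only transfer, the sketch below extracts a luminance channel and shifts/scales the style image's luminance to match the content's mean and standard deviation. This is an illustrative NumPy sketch under assumed conventions (BT.601 luma weights, mean/std matching), not the code this repository uses:

```python
import numpy as np

RGB_TO_Y = np.array([0.299, 0.587, 0.114])  # ITU-R BT.601 luma weights

def luminance(img):
    """Luminance channel of an RGB image with shape (H, W, 3), values in [0, 1]."""
    return img @ RGB_TO_Y

def match_luminance(style_y, content_y, eps=1e-8):
    """Shift and scale the style luminance so its mean and std
    match those of the content luminance."""
    scale = content_y.std() / (style_y.std() + eps)
    return (style_y - style_y.mean()) * scale + content_y.mean()
```

After this matching step, the transfer operates on luminance only, and the content image's original color channels are reattached to the result.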
Here we demonstrate the effect of the transfer using a photo (original file) of the Grainger Engineering Library at UIUC. Titles of the artworks we used are also given:
| Original Image | The Shipwreck of the Minotaur |
| De Sterrennacht (The Starry Night) | Der Schrei (The Scream) |
| Femme Nue Assise (Seated Nude) | Композиция 7 (Composition VII) |
| 夢入靑城天下幽人間仙境圖 | 神奈川沖浪裏 (The Great Wave off Kanagawa) |
under construction ᕕ( ᐛ )ᕗ
Transfer without Color Preserving | Transfer with histogram matching |
Luminance-only transfer, histogram matched | Luminance-only transfer |
Leon A. Gatys, Alexander S. Ecker & Matthias Bethge (2015). A Neural Algorithm of Artistic Style. In: CoRR.
Leon A. Gatys, Matthias Bethge, Aaron Hertzmann & Eli Shechtman (2016). Preserving Color in Neural Artistic Style Transfer. In: CoRR.
Leon A. Gatys, Alexander S. Ecker, Matthias Bethge, Aaron Hertzmann & Eli Shechtman (2016). Controlling Perceptual Factors in Neural Style Transfer. In: CoRR.
Leon A. Gatys, Alexander S. Ecker & Matthias Bethge (2016). Image Style Transfer Using Convolutional Neural Networks. In: The IEEE Conference on Computer Vision and Pattern Recognition, pp. 2414-2423.
F. Pitié & A. Kokaram (2007). The linear Monge-Kantorovitch linear colour mapping for example-based colour transfer. In: IETCVMP. 4th European Conference on Visual Media Production, 2007, pp. 1-9.
In making this program, I referred to the helpful work of jcjohnson, apple2373, mattya, and andersbll.
Francis Hsu, University of Illinois at Urbana–Champaign.