Colorizing Images

A deep learning approach to colorizing images, specifically for Pokemon.

The current model was trained on screenshots taken from Pokemon Silver, Crystal, and Diamond, then tested on Pokemon Blue Version. Sample results below.

(sample result image pair: test_1)

Basic Training Usage

python train.py --help

  • -c --checkpoint_dir <str> [path to save the model]
  • -b --batch_size <int> [batch size]
  • -d --data_dir <str> [path to root image folder]
  • -n --normalize <str> [y/n normalize training images]

You can try this with the sample training images provided in images/train. Run python train.py -c model_dir/ -b 5 -d ../images/train/ -n n to start training on the small set of sample images. This will create a directory called model_dir in the train folder. If you get an error about CUDA running out of memory, reduce the batch size.
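For reference, here is a minimal sketch of how these flags could be wired up with argparse. The flag names and help strings come from the usage text above; train.py's actual parsing may differ.

```python
# Hypothetical reconstruction of train.py's argument parsing -- the flag
# names and help strings match the usage text above.
import argparse

parser = argparse.ArgumentParser(description="Train the colorization model")
parser.add_argument("-c", "--checkpoint_dir", type=str, required=True,
                    help="path to save the model")
parser.add_argument("-b", "--batch_size", type=int, default=5,
                    help="batch size")
parser.add_argument("-d", "--data_dir", type=str, required=True,
                    help="path to root image folder")
parser.add_argument("-n", "--normalize", type=str, default="n",
                    help="y/n normalize training images")
args = parser.parse_args()
```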

The files in the images/train folder are as follows:

  • image_1.png: The original image extracted from the video (after any cropping).
  • image_1_resized.png: The original image resized to (160,144).
  • image_1_resized_gray.png: The resized image converted to grayscale.

Training learns to reconstruct the resized color image from the resized grayscale image.
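As a concrete illustration, a (gray, color) training pair could be loaded like this, assuming Pillow and NumPy. The filenames follow the convention above; the repo's actual data loader may differ.

```python
import numpy as np
from PIL import Image

# Input: the grayscale image; target: the color image it came from.
gray = np.asarray(Image.open("images/train/image_1_resized_gray.png").convert("L"),
                  dtype=np.float32)
color = np.asarray(Image.open("images/train/image_1_resized.png").convert("RGB"),
                   dtype=np.float32)

# With -n y, pixel values are presumably scaled to [0, 1] first.
gray, color = gray / 255.0, color / 255.0

# A (160,144) image gives arrays of shape (144, 160) and (144, 160, 3).
print(gray.shape, color.shape)
```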

Evaluating on Images

I've included a trained model in the models/ directory that you can run on your own images. You can run the model on either a single image or a folder of images. For a single image, run eval_one.py, passing it the model and the image as arguments. For multiple images, run eval.py, passing it the model and the folder of images. eval.py saves its results in the output folder, whereas eval_one.py saves them in the current directory. Examples:

python eval_one.py ../models/generation_2/ my_image.png

python eval.py ../models/generation_2/ ../images/testing/
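Structurally, eval.py does something like the following. The colorize function below is a placeholder stub standing in for the repo's actual model inference:

```python
import glob
import os

from PIL import Image

def colorize(image):
    # Placeholder stub -- the real script runs the trained model here.
    return image.convert("RGB")

os.makedirs("output", exist_ok=True)
for path in glob.glob("../images/testing/*.png"):
    result = colorize(Image.open(path).convert("L"))
    result.save(os.path.join("output", os.path.basename(path)))
```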

Training your own data

Scripts are included to help you build your own dataset; you will want one, since good results require a large amount of data. The results below were trained on roughly 50,000 images.

The easiest way to obtain images is to extract them from YouTube walkthrough videos of different games. Given a folder of videos:

```
videos/
    video_1.mp4
    video_2.mp4
    ...
```

use extract_frames.sh to extract images from each video. Just pass it the folder containing the videos.
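The script itself isn't reproduced here, but a rough Python equivalent, assuming ffmpeg is installed, would look like this. The one-frame-per-second sampling rate and the frames/ output layout are arbitrary choices for illustration:

```python
import glob
import os
import subprocess

for video in glob.glob("videos/*.mp4"):
    name = os.path.splitext(os.path.basename(video))[0]
    os.makedirs(os.path.join("frames", name), exist_ok=True)
    # Sample one frame per second from the video.
    subprocess.run(["ffmpeg", "-i", video, "-vf", "fps=1",
                    os.path.join("frames", name, "frame_%05d.png")],
                   check=True)
```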

Depending on whether the video has a border around the game, you may need to use crop_images.py to crop the border out. The script contains commented-out lines you can uncomment to preview an image before it crops them all, so you can verify the crop is correct.

Finally, use convert_images.py to create the resized and grayscale images for training.
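Together, the crop and convert steps amount to something like this Pillow sketch. The crop box values are placeholders you would tune per video, and (160, 144) matches the training size used above:

```python
import glob
from PIL import Image

CROP_BOX = (0, 0, 480, 432)  # (left, top, right, bottom) -- placeholder values

for path in glob.glob("frames/**/*.png", recursive=True):
    img = Image.open(path).crop(CROP_BOX)     # crop_images.py step
    base = path[:-len(".png")]
    resized = img.resize((160, 144))          # convert_images.py steps
    resized.save(base + "_resized.png")
    resized.convert("L").save(base + "_resized_gray.png")
```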

Results

(16 sample result image pairs)
