
PIxelBrush

Introduction

This is a PyTorch implementation of a tool called PixelBrush that generates an artistic image from a given text description. The implementation is based on Jiale Zhi's PixelBrush work and on a PyTorch implementation of the Generative Adversarial Text-to-Image Synthesis paper.

Requirements

  • pytorch
  • visdom
  • h5py
  • PIL
  • numpy
  • skip-thoughts
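A minimal sanity check that the core dependencies are importable is shown below. The module names are assumptions based on the list above; skip-thoughts in particular may be installed under a different name, so adjust as needed.

```python
# Minimal sanity check that the core dependencies are importable.
# Module names are assumptions based on the requirements list above;
# skip-thoughts may be installed under a different module name.
import torch
import visdom
import h5py
import numpy
from PIL import Image

print("torch:", torch.__version__)
print("h5py:", h5py.__version__)
print("numpy:", numpy.__version__)
```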

Dataset

We use the images from the Oxford Paintings Dataset and create descriptions for them using NeuralTalk2.

The image names and descriptions we use are listed in the data/oxford/vis_oxford.json file.
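To get a quick look at the data, the sketch below loads the JSON file and prints one entry. The exact schema of each entry is an assumption, so inspect the output rather than relying on specific field names.

```python
import json

# Load the image-name/description pairs used for training.
# The schema of each entry is an assumption; print one to inspect it.
with open("data/oxford/vis_oxford.json", "r") as f:
    entries = json.load(f)

print("number of entries:", len(entries))
sample = entries[0] if isinstance(entries, list) else next(iter(entries.items()))
print("sample entry:", sample)
```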

Training

  1. Create text embeddings using skip-thoughts and the create_embedding_oxford_uniskip_biskip.py script. (You can skip this step by using the precomputed files in data/oxford.)
  2. Download the Oxford Paintings Dataset.
  3. Create the hd5 dataset using convert_oxford_to_hd5_script.py. (You can skip this step by downloading the hd5 files from Drive; a short inspection sketch follows this list.)
  4. Run runtime.py.
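Since the converted hd5 files drive both training and testing, the sketch below opens one with h5py and lists the datasets it contains. The file name is a placeholder and the internal layout is not documented here, so treat this purely as an inspection aid.

```python
import h5py

# Open a converted dataset file and list its contents.
# "oxford_train.h5" is a placeholder name -- use whatever
# convert_oxford_to_hd5_script.py (or the Drive download) produced.
with h5py.File("oxford_train.h5", "r") as f:
    def show(name, obj):
        if isinstance(obj, h5py.Dataset):
            print(name, obj.shape, obj.dtype)
    f.visititems(show)
```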

Testing

  1. Download the hd5 files from Drive.
  2. Run runtime.py with the following parameters (see the invocation sketch below):
  • type: choose an architecture (simple | normal | deep)
  • inference = True
  • split = 10
  • pre_trained_gen = "pre-trained-models/{}_200.pth"
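The following is a hedged sketch of how these parameters might be passed to runtime.py. Whether runtime.py actually exposes them as command-line flags (rather than, say, variables set inside the script) is an assumption, so check its argument handling first.

```python
import subprocess

# Assumption: runtime.py accepts these parameters as command-line flags.
# If it reads them from variables or a config file instead, set them there.
arch = "normal"  # one of: simple | normal | deep
subprocess.run([
    "python", "runtime.py",
    "--type", arch,
    "--inference", "True",
    "--split", "10",
    "--pre_trained_gen", "pre-trained-models/{}_200.pth".format(arch),
], check=True)
```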
