This is a PyTorch implementation of PixelBrush, a tool that generates an artistic image from a given text description. The implementation is based on Jaile's work and on a PyTorch implementation of the Generative Adversarial Text-to-Image Synthesis paper.
- pytorch
- visdom
- h5py
- PIL
- numpy
- skip-thoughts
We used images from The Oxford Paintings Dataset and generated descriptions for them using Neuraltalk2.
The image names and descriptions we use are in the data/oxford/vis_oxford.json file.
- Create text embeddings using skip-thoughts and the create_embedding_oxford_uniskip_biskip.py script (you can skip this step by using the files in data/oxford).
- Download the Oxford Paintings dataset.
- Create an hd5 dataset using convert_oxford_to_hd5_script.py (you can skip this step by downloading the hd5 files from drive).
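The shape of the embedding step can be sketched as below. This is not the author's create_embedding_oxford_uniskip_biskip.py: the real script uses pretrained uni-skip and bi-skip encoders, which require downloading model files. Here a hypothetical averaged-random-vector encoder stands in so the example is self-contained; only the input/output shapes are meant to be illustrative.

```python
import numpy as np

def embed_captions(captions, dim=2400, seed=0):
    """Stand-in for a skip-thoughts encoder: maps each caption to a
    fixed-size vector (real skip-thought vectors are 2400-d). A real
    uni-skip/bi-skip encoder would replace this function."""
    rng = np.random.RandomState(seed)
    vocab = sorted({w for c in captions for w in c.lower().split()})
    word_vecs = {w: rng.randn(dim) for w in vocab}
    # Average the word vectors as a crude sentence embedding.
    return np.stack([
        np.mean([word_vecs[w] for w in c.lower().split()], axis=0)
        for c in captions
    ])

captions = ["a painting of a horse in a field",
            "a portrait of a woman in a red dress"]
embeddings = embed_captions(captions)
print(embeddings.shape)  # (2, 2400): one vector per caption
```

The resulting array (one row per caption) is what gets saved alongside the images for the later conversion step.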
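The conversion step can be sketched roughly as follows. The dataset names (`images`, `embeddings`, `captions`) and shapes here are assumptions for illustration, not necessarily those produced by convert_oxford_to_hd5_script.py.

```python
import h5py
import numpy as np

def write_hd5(path, images, embeddings, captions):
    """Pack images, text embeddings, and raw captions into one HDF5 file
    so training can stream examples instead of reading many small files."""
    with h5py.File(path, "w") as f:
        f.create_dataset("images", data=images, compression="gzip")
        f.create_dataset("embeddings", data=embeddings)
        # Variable-length UTF-8 strings for the raw captions.
        dt = h5py.special_dtype(vlen=str)
        f.create_dataset("captions", data=np.array(captions, dtype=object), dtype=dt)

# Toy example: 4 RGB 64x64 images with 2400-d embeddings (sizes assumed).
images = np.random.randint(0, 256, size=(4, 64, 64, 3), dtype=np.uint8)
embeddings = np.random.randn(4, 2400).astype(np.float32)
captions = ["a painting of a boat"] * 4
write_hd5("oxford_example.h5", images, embeddings, captions)

with h5py.File("oxford_example.h5", "r") as f:
    print(f["images"].shape, f["embeddings"].shape)
```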
- Run runtime.py.
- Download the hd5 files from drive.
- Run runtime.py with the following parameters:
- type: choose an architecture (simple | normal | deep)
- inference = True
- split = 10
- pre_trained_gen = "pre-trained-models/{}_200.pth"
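Assuming argparse-style flags named after the parameters above (the exact flag syntax is an assumption; check runtime.py for how it parses arguments, and note that the script presumably substitutes the chosen architecture into the `{}` placeholder), an inference run might look like:

```
python runtime.py --type normal --inference True --split 10 --pre_trained_gen "pre-trained-models/{}_200.pth"
```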