
VAE/GAN

TensorFlow code for "Autoencoding beyond pixels using a learned similarity metric".

The paper is arguably the first to combine the Variational Autoencoder (VAE) with Generative Adversarial Networks (GAN): the GAN discriminator's intermediate features provide a learned (perceptual) similarity metric that replaces the pixel-wise reconstruction loss of the original VAE. VAE/GAN can also be used for image reconstruction and visual attribute manipulation.
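As a rough illustration of the learned-similarity idea (the function and tensor names below are illustrative assumptions, not identifiers from this repository), the reconstruction term is computed on intermediate discriminator features rather than on pixels, and is combined with the usual KL prior term and GAN losses:

import tensorflow as tf

def vaegan_losses(z_mean, z_logvar, feat_real, feat_recon, d_real, d_recon, d_fake):
    # KL divergence between the encoder distribution q(z|x) and the unit-Gaussian prior.
    kl = -0.5 * tf.reduce_mean(
        tf.reduce_sum(1.0 + z_logvar - tf.square(z_mean) - tf.exp(z_logvar), axis=1))
    # Learned similarity: squared error in discriminator feature space
    # replaces the pixel-wise reconstruction term of the plain VAE.
    feat_loss = tf.reduce_mean(
        tf.reduce_sum(tf.square(feat_real - feat_recon), axis=[1, 2, 3]))
    # Standard GAN cross-entropy losses on real images, reconstructions,
    # and samples decoded from the prior.
    d_loss = tf.reduce_mean(
        tf.nn.sigmoid_cross_entropy_with_logits(labels=tf.ones_like(d_real), logits=d_real)
        + tf.nn.sigmoid_cross_entropy_with_logits(labels=tf.zeros_like(d_recon), logits=d_recon)
        + tf.nn.sigmoid_cross_entropy_with_logits(labels=tf.zeros_like(d_fake), logits=d_fake))
    g_loss = tf.reduce_mean(
        tf.nn.sigmoid_cross_entropy_with_logits(labels=tf.ones_like(d_recon), logits=d_recon)
        + tf.nn.sigmoid_cross_entropy_with_logits(labels=tf.ones_like(d_fake), logits=d_fake))
    return kl, feat_loss, d_loss, g_loss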

About training instability

I also found that training is quite unstable, so I updated the code to stabilize the adversarial training of VAE/GAN. The details are given below.

Pretrained models

The checkpoint files can be downloaded from Google Drive. Please unzip them inside the project directory. I will upload new models after more training iterations.
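A minimal TF 1.x restore sketch, assuming the VAE/GAN graph has already been built (as main.py does) and that the unzipped folder is named ./model_vaegan; both names are assumptions and may differ from the actual repository layout:

import tensorflow as tf

# Build the encoder/decoder/discriminator graph first, then restore the weights.
saver = tf.train.Saver()
with tf.Session() as sess:
    ckpt = tf.train.latest_checkpoint("./model_vaegan")  # assumed unzip location
    if ckpt:
        saver.restore(sess, ckpt)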

Prerequisites

  • tensorflow >=1.4

Dataset requirement

Download the CelebA dataset and unzip it into a directory. Note that this directory must contain the images directly, with no sub-directories.
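For clarity, this is what a flat data directory means in practice, shown as a small sketch (the path and the .jpg extension are assumptions):

import glob
import os

data_path = "/path/to/celebA"  # the directory you pass via --path
image_files = sorted(glob.glob(os.path.join(data_path, "*.jpg")))
print("found {} images".format(len(image_files)))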

Usage

Train:

$ python main.py --op 0 --path <your data path>

Test:

$ python main.py --op 1 --path <your data path>
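The two flags above are typically wired up with argparse roughly as follows; this is a sketch of the assumed interface, and the real argument handling in main.py may differ in detail:

import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--op", type=int, default=0, help="0 = train, 1 = test")
parser.add_argument("--path", type=str, required=True,
                    help="directory that directly contains the CelebA images")
args = parser.parse_args()

if args.op == 0:
    print("training on images in", args.path)
else:
    print("testing with images in", args.path)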

Experimental visual results

Input:

Reconstruction:

Issues

If you find a bug or problem, please open an issue to report it. Thanks!

Reference code

DCGAN

autoencoding_beyond_pixels
