
LIA: Latently Invertible Autoencoder with Adversarial Learning

Requirements

  • TensorFlow (tested with v1.12.0)
  • Python 3.6

Training

Decoder Training

Run

We simply replace the Mapping Network in StyleGAN with an invertible network; all remaining networks are unchanged.
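For intuition, here is a minimal NumPy sketch of the kind of additive coupling layer used in invertible networks such as Glow (referenced below). This is an illustrative assumption, not the repository's actual architecture:

import numpy as np

def coupling_forward(z, weights, bias):
    # Split the latent in half; shift one half conditioned on the other.
    z1, z2 = np.split(z, 2, axis=-1)
    shift = np.tanh(z1 @ weights + bias)  # hypothetical shift sub-network
    return np.concatenate([z1, z2 + shift], axis=-1)

def coupling_inverse(w, weights, bias):
    # Inversion only needs a subtraction, so the layer is exactly invertible.
    w1, w2 = np.split(w, 2, axis=-1)
    shift = np.tanh(w1 @ weights + bias)
    return np.concatenate([w1, w2 - shift], axis=-1)

rng = np.random.RandomState(0)
z = rng.randn(4, 512).astype(np.float32)
W = 0.01 * rng.randn(256, 256).astype(np.float32)
b = np.zeros(256, dtype=np.float32)
assert np.allclose(coupling_inverse(coupling_forward(z, W, b), W, b), z, atol=1e-5)

Stacking such layers yields a mapping whose inverse is available in closed form, which is what makes the latent mapping exactly invertible.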

Run the training script with python train_decoder.py. For training details, please refer to StyleGAN; train_decoder.py is exactly the same as StyleGAN's train.py script, and we use a different name only to distinguish it from the training script of LIA's second stage.

Encoder Training

Prepare for training Encoder

  1. Set the dataset's paths in data_train and data_test in (Data_dir), as shown in the sketch after this list.
  2. Set the decoder's path (derived from the first-stage training) in decoder_pkl in (Decoder_pkl).
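For concreteness, the edits might look like the following; all paths are hypothetical placeholders:

data_train = '/path/to/train_data'    # hypothetical training-set path
data_test = '/path/to/test_data'      # hypothetical test-set path
decoder_pkl = '/path/to/decoder.pkl'  # hypothetical first-stage weights path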

Run

python train_encoder.py

Using pre-trained networks

All pre-trained networks are available on Google Drive; alternatively, you can produce them with the training scripts. The weights are stored as Python pickle (PKL) files, as StyleGAN does. Each weights file contains 5 instances of dnnlib.tflib.Network: E, G, D, Gs, and NE.
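A minimal loading sketch following the StyleGAN pickle convention (the unpacking order E, G, D, Gs, NE is assumed from the description above, and the file name is a placeholder taken from the table below):

import pickle
import dnnlib.tflib as tflib

tflib.init_tf()  # create the default TensorFlow session (StyleGAN convention)
with open('ffhq_128x128.pkl', 'rb') as f:  # placeholder file name
    E, G, D, Gs, NE = pickle.load(f)  # 5 dnnlib.tflib.Network instances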

Path             Description
LIA.pdf          Paper PDF.
ffhq_128x128     LIA trained on the FFHQ dataset.
cat_128x128      LIA trained on the LSUN Cat dataset.
bedroom_128x128  LIA trained on the LSUN Bedroom dataset.
car_128x96       LIA trained on the LSUN Car dataset.
boundaries       Boundaries obtained by InterFaceGAN on the FFHQ dataset.

Testing

  1. Download the pre-trained network weights and the boundaries file.
  2. Prepare the test data, such as .png images.

Sampling

python test.py --restore_path MODEL_PATH --mode 0 --batch_size 16
[Figures: sampling results on FFHQ, LSUN Bedroom, LSUN Cat, and LSUN Car]
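Conceptually, sampling (mode 0) draws random latents and decodes them with Gs. A hedged sketch using the StyleGAN Network.run API (names taken from the weights description above; the actual code in test.py may differ):

import numpy as np
# E, G, D, Gs, NE loaded as in the snippet above
latents = np.random.randn(16, *Gs.input_shape[1:])     # one random latent per sample
images = Gs.run(latents, None, randomize_noise=True)   # NCHW image batch (assumed interface)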

Reconstruction

python test.py --data_dir_test DATA_PATH --restore_path MODEL_PATH --mode 1 --batch_size 8
[Figures: original and reconstructed images for FFHQ, LSUN Bedroom, LSUN Cat, and LSUN Car]

For each group of images, the first row shows the original images and the second row shows the reconstructed images.
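Conceptually, reconstruction (mode 1) encodes each image with E and decodes the resulting latent with the generator. A hedged sketch (the exact call pattern and preprocessing in test.py may differ):

# E and Gs loaded as above; images is a preprocessed NCHW float batch
latents = E.run(images)         # encode images into the latent space (assumed interface)
recons = Gs.run(latents, None)  # decode latents back to images (assumed interface)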

Interpolation

python test.py --data_dir_test DATA_PATH --restore_path MODEL_PATH --mode 3
[Figures: interpolation results on FFHQ]
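Interpolation (mode 3) can be understood as linearly blending two encoded latents before decoding; a minimal sketch with assumed names:

import numpy as np
# w_a and w_b are latents obtained by encoding two test images with E
frames = [Gs.run((1.0 - t) * w_a + t * w_b, None)  # assumed interface
          for t in np.linspace(0.0, 1.0, 8)]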

Manipulation

python test.py --data_dir_test DATA_PATH --boundaries BOUNDARY_PATH --restore_path MODEL_PATH --mode 4
[Figures: manipulation results on FFHQ]

Each row shows the original image, its reconstruction, and edits along the glasses, pose, gender, smile, and age boundaries.
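Manipulation (mode 4) follows the InterFaceGAN recipe: shift an encoded latent along a boundary's normal direction and decode. A minimal sketch (the boundary variable and edit strengths are assumptions):

# w is an encoded latent; boundary is a direction vector from the
# boundaries file (e.g. for smile); names and shapes are assumptions
for alpha in (-3.0, 0.0, 3.0):     # hypothetical edit strengths
    edited = w + alpha * boundary  # move along the semantic direction
    image = Gs.run(edited, None)   # decode the edited latent (assumed interface)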

References

StyleGAN

ProGAN

Glow
