GDN_Inpainting

Open-source code for the paper "Pixel-wise Dense Detector for Image Inpainting" (PG 2020).

Deep inpainting techniques fill the missing regions of corrupted images with semantically correct and visually plausible content. The example results shown above are produced by our framework.

Prerequisites

  • Ubuntu 16.04
  • Python 3
  • NVIDIA GPU with CUDA and cuDNN
  • TensorFlow 1.12.0
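
If TensorFlow is not already installed, the GPU build matching the version above can typically be installed with pip (assuming a compatible CUDA and cuDNN setup is already in place):

pip install tensorflow-gpu==1.12.0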

Usage

Set up

  • Clone this repo:
git clone https://github.com/Evergrow/GDN_Inpainting.git
cd GDN_Inpainting

Training

  • Modify the gpu id, dataset path, mask path, and checkpoint path in the config file (a hypothetical excerpt is sketched below), and adjust other parameters as you like.
  • Run python train.py and monitor training progress with tensorboard --logdir [path to checkpoints]
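
The exact key names live in the repository's config file, so treat the excerpt below as a hypothetical sketch of the four settings named above rather than the real file:

# Hypothetical config excerpt -- key names are illustrative only; check the repo's config file.
GPU: 0                                # gpu id used for training
DATA_PATH: ./data/images              # dataset path (training images)
MASK_PATH: ./data/masks               # mask path (irregular masks)
CHECKPOINT_PATH: ./checkpoints/gdn    # checkpoint path (weights and logs)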

Testing

Choose the input image, mask and model to test:

python test.py --image [input path] --mask [mask path] --output [output path] --checkpoint_dir [model path]
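
To test a whole folder of images, a small driver script can invoke test.py once per image/mask pair using the flags above. This is a minimal sketch, not part of the repository; it assumes each image and its mask share the same file name, and all folder paths are placeholders:

# Minimal batch-testing sketch (not from the repo). Runs test.py once per
# image/mask pair; assumes matching file names in image_dir and mask_dir.
import os
import subprocess

image_dir = "examples/images"               # placeholder input folder
mask_dir = "examples/masks"                 # placeholder mask folder
output_dir = "examples/results"             # where inpainted results are written
checkpoint_dir = "./checkpoints/celeba-hq"  # placeholder model folder

os.makedirs(output_dir, exist_ok=True)
for name in sorted(os.listdir(image_dir)):
    subprocess.run([
        "python", "test.py",
        "--image", os.path.join(image_dir, name),
        "--mask", os.path.join(mask_dir, name),
        "--output", os.path.join(output_dir, name),
        "--checkpoint_dir", checkpoint_dir,
    ], check=True)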

Pretrained models

Pretrained models for CelebA-HQ and Places2 are released for quick testing. Download them via the Google Drive links and move them into your ./checkpoints directory.
