GAN-Leaks


This repository contains the implementation of "GAN-Leaks: A Taxonomy of Membership Inference Attacks against Generative Models" (CCS 2020).

Data Preparation

Download the CelebA (aligned and cropped face) dataset and split the data into disjoint training (positive query), testing (negative query), and reference sets. Save the PNG images of each set in a separate folder.
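The split can be scripted; a minimal sketch, assuming PNG files sit flat in one source directory (the function name, folder names, and split sizes are illustrative, not fixed by this repo):

```python
import os
import random
import shutil

def split_dataset(src_dir, out_dir, n_train, n_test, seed=0):
    """Split PNG images into disjoint train (positive query),
    test (negative query), and reference folders."""
    files = sorted(f for f in os.listdir(src_dir) if f.endswith(".png"))
    random.Random(seed).shuffle(files)  # fixed seed for a reproducible split
    splits = {
        "train": files[:n_train],
        "test": files[n_train:n_train + n_test],
        "reference": files[n_train + n_test:],
    }
    for name, subset in splits.items():
        dst = os.path.join(out_dir, name)
        os.makedirs(dst, exist_ok=True)
        for f in subset:
            shutil.copy(os.path.join(src_dir, f), dst)
    return {k: len(v) for k, v in splits.items()}
```

Because the slices are taken from one shuffled list, the three sets are disjoint by construction.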

Requirements

We provide a TensorFlow implementation (used for pggan, wgangp, and dcgan) and a PyTorch implementation (used for vaegan) of our attack models. The environments can be set up with Anaconda using the following commands.

  • TensorFlow:

    conda create --name ganleaks-tf python=3.6
    conda activate ganleaks-tf
    conda install tensorflow-gpu=1.14.0
    pip install six tqdm pillow matplotlib scikit-learn
  • PyTorch:

    conda create --name ganleaks-pytorch python=3.6
    conda activate ganleaks-pytorch
    conda install pytorch=1.2.0
    conda install torchvision -c pytorch
    pip install six tqdm pillow matplotlib scikit-learn scikit-image

GAN Models

We pre-train the following victim GAN models.

  • pggan (Progressive Growing of GANs)

    • Requirements:
      activate Tensorflow environment

      conda activate ganleaks-tf
    • Pre-processing data:

      cd gan_models/pggan
      python dataset_tool.py create_celeba_subset \
      "Directory for saving the output .tfrecords files" \
      "Training data directory containing the png images"
    • Training:
      adjust the arguments (e.g. data_dir, dataset) in pggan/config.py

      cd gan_models/pggan
      python run.py
    • Generating samples:

      cd gan_models/pggan
      python run.py --app gen --model_path "Path to the model .pkl file"
  • wgangp (Wasserstein GAN with Gradient Penalty)

    • Requirements:
      activate Tensorflow environment

      conda activate ganleaks-tf
    • Training:

      cd gan_models/wgangp
      python train.py --data_dir "Directory of the training data"
    • Generating samples:

      cd gan_models/wgangp
      python sample.py --model_dir "Directory of the model checkpoints"  
  • dcgan (Deep Convolutional GAN)

    • Requirements:
      activate Tensorflow environment

      conda activate ganleaks-tf
    • Training:

      cd gan_models/dcgan
      python main.py --data_dir "Directory of the training data" 
    • Generating samples:

      cd gan_models/dcgan
      python main.py --app gen --checkpoint_dir "Directory of the model checkpoints"
  • vaegan (VAE GAN)

    • Requirements:
      activate Pytorch environment

      conda activate ganleaks-pytorch
    • Training:

      cd gan_models/vaegan
      python train.py --data_dir "Directory of the training data"
    • Generating samples:

      cd gan_models/vaegan
      python sample.py --model_dir "Directory of the model checkpoints"

Attack Models

  • Full black-box attack (fbb):

    1. Generate samples from the victim model (see GAN Models above).
    2. Run the attack:
    cd attack_models
    python fbb.py \
    -name "Name of the output folder" \
    -posdir "Directory of the positive (training) query png images" \
    -negdir "Directory of the negative (testing) query png images" \
    -gdir "Directory of the generated.npz file (default=model_dir)"
  • Partial black-box attack (pbb):

    cd attack_models
    python pbb_***.py \
    -name "Name of the output folder" \
    -posdir "Directory of the positive (training) query png images" \
    -negdir "Directory of the negative (testing) query png images" \ 
    -gdir "Directory of the gan model checkpoints" \
    -init nn \
    -ndir "Directory of the fbb results" 

    pbb_***.py should be selected according to the type of the victim GAN model.

  • White-box attack (wb):

    cd attack_models
    python wb_***.py \ 
    -name "Name of the output folder" \
    -posdir "Directory of the positive (training) query png images" \
    -negdir "Directory of the negative (testing) query png images" \ 
    -gdir "Directory of the gan model checkpoints" 

    wb_***.py should be selected according to the type of the victim GAN model.

  • Attack calibration:
    Train a reference model and perform the attacks on the reference model. Evaluate the calibrated attack by providing the "-rdir" argument (see below).
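Conceptually, the full black-box attack scores each query by its distance to the nearest generated sample (training members tend to lie closer to the generator's output), and calibration subtracts the same score computed against a reference model's samples. A minimal numpy sketch of that scoring logic, assuming images are flattened to vectors and using plain L2 distance (function names and the distance choice are illustrative):

```python
import numpy as np

def fbb_scores(queries, generated):
    """Full black-box membership score: negative distance from each
    query to its nearest neighbor among the generated samples.
    Higher score = more likely a training member."""
    # pairwise squared L2 distances, shape (n_queries, n_generated)
    d2 = ((queries[:, None, :] - generated[None, :, :]) ** 2).sum(-1)
    return -np.sqrt(d2.min(axis=1))

def calibrated_scores(queries, generated, reference_generated):
    """Attack calibration: subtract the score obtained against samples
    from a reference model trained on disjoint (reference) data."""
    return fbb_scores(queries, generated) - fbb_scores(queries, reference_generated)
```

The partial black-box and white-box variants replace the nearest generated sample with an optimized reconstruction, but the membership score keeps the same distance-based form.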

Evaluation

To compute the AUC ROC value and plot the ROC curve:

cd attack_models/tools
python eval_roc.py --attack_type "fbb/pbb/wb" \
-ldir "Directory of the attack results" \
-rdir "Directory of the attack results on reference model (optional)" \
-sdir "Directory for saving the evaluation results (optional)"
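Under the hood, the evaluation amounts to labeling positive-query scores 1 and negative-query scores 0 and computing the ROC AUC. A minimal sketch with scikit-learn (already installed in both environments above; the function name is illustrative):

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

def evaluate_attack(pos_scores, neg_scores):
    """AUC-ROC of membership scores: positive (training) queries
    should receive higher scores than negative (testing) queries."""
    labels = np.concatenate([np.ones(len(pos_scores)), np.zeros(len(neg_scores))])
    scores = np.concatenate([pos_scores, neg_scores])
    fpr, tpr, _ = roc_curve(labels, scores)  # points for plotting the ROC curve
    return roc_auc_score(labels, scores), fpr, tpr
```

An AUC of 0.5 corresponds to random guessing, i.e. no membership leakage.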

Pre-trained Models

Pre-trained victim model checkpoints can be downloaded using the links below. Specifically, the victim models are trained on 20k images with an identity-based split. The selected images can be found here, with "identity index" (first column) and "image file name" (second column).

               pggan   wgangp   dcgan   vaegan
victim model   link    link     link    link

Citation

@inproceedings{chen20ccs,
  author    = {Dingfan Chen and Ning Yu and Yang Zhang and Mario Fritz},
  title     = {GAN-Leaks: A Taxonomy of Membership Inference Attacks against Generative Models},
  booktitle = {ACM Conference on Computer and Communications Security (CCS)},
  year      = {2020}
}

Acknowledgements

Our implementation uses the source code from the following repositories:
