
Deep-learning the Latent Space of Light Transport

Created by Pedro Hermosilla, Sebastian Maisch, Tobias Ritschel, Timo Ropinski.

[Teaser figure]

This repository contains the code of our EGSR 2019 paper, Deep-learning the Latent Space of Light Transport. A video of our method can be found at the following link.

Citation

If you find this code useful, please consider citing us:

@article{hermosilla2018ginn,
    title={Deep-learning the Latent Space of Light Transport},
    author={Pedro Hermosilla and Sebastian Maisch and Tobias Ritschel and Timo Ropinski},
    journal={Computer Graphics Forum (Proc. EGSR 2019)},
    year={2019}
}

Pre-requisites

numpy
pygame
CUDA 9.0
tensorflow 1.12
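
If you manage Python packages with pip, one possible way to install the Python dependencies is sketched below. It assumes a CUDA 9.0 toolkit is already installed and that the GPU build of TensorFlow is wanted; adjust to your environment as needed.

pip install numpy pygame tensorflow-gpu==1.12.0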

Installation

First, download the code for Monte Carlo Convolutions (MCCNN) from the following link. The software expects this code to be in a folder named MCCNN. Then, follow the instructions in the MCCNN README to compile the library.
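
For reference, a typical setup is sketched below. The clone URL is an assumption (MCCNN is assumed to be hosted under the same GitHub organization); adjust it if your copy of MCCNN lives elsewhere.

git clone https://github.com/viscom-ulm/MCCNN.git MCCNN    # assumed URL; place the code in a folder named MCCNN
cd MCCNN                                                   # then compile the library as described in the MCCNN README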

Real-time Viewer

Modify the compile script in the folder rt_viewer/cuda_ops with your CUDA and python3 paths. Then, execute the compile script to build the CUDA/OpenGL operations. Lastly, run rt_viewer/sss.sh or rt_viewer/gi.sh to visualize two 3D models with the trained networks.
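
A possible session is sketched below. The compile script name is hypothetical (use the script shipped in rt_viewer/cuda_ops), and the effects behind sss.sh and gi.sh are assumed to be subsurface scattering and global illumination.

cd rt_viewer/cuda_ops
# edit the compile script so it points to your CUDA and python3 installations
sh compile.sh        # hypothetical name; run the compile script provided in this folder
cd ..
sh sss.sh            # assumed subsurface-scattering demo
sh gi.sh             # assumed global-illumination demo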

Training

In order to train a network on our datasets, first download the data from the following link (COMING SOON). Then, execute the script processData.py to generate the numpy files. Lastly, execute the command:

python GITrainRT.py --useDropOut --useDropOutConv --augment --dataset 2

The dataset parameter selects which light-transport effect the network is trained on.
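
Putting the steps together, a full training run might look like the sketch below. It assumes processData.py needs no arguments; check the script for dataset-specific options.

python processData.py                                                   # convert the downloaded data to numpy files
python GITrainRT.py --useDropOut --useDropOutConv --augment --dataset 2 # train on the chosen effect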
