CuPADMAN

Determine the in-plane orientations of photon-sparse images and merge them to obtain the transmission function of an unknown mask.

This project includes a simple simulated-data generator as well as a reconstruction program. Both programs use cupy, a CUDA-based drop-in replacement for numpy, to perform the reconstructions. The goal is to keep the code short and easy to comprehend by using as many library functions as possible.

Note: The code turns out to be much faster with custom kernels, so some simplicity has been sacrificed for speed.
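To illustrate the two styles mentioned above, here is a minimal sketch (not code from this repository) of cupy used both as a drop-in numpy replacement and with a hand-written CUDA kernel compiled through cupy.RawKernel:

import cupy as cp

# Library-function style: the numpy API, executed on the GPU
frame = cp.random.poisson(0.1, size=(128, 128)).astype(cp.float64)
total_photons = float(frame.sum())

# Custom-kernel style: a trivial element-wise scaling kernel written in raw CUDA C
scale_kernel = cp.RawKernel(r'''
extern "C" __global__
void scale(const double *in, double *out, const double factor, const long long size) {
    long long t = blockDim.x * blockIdx.x + threadIdx.x;
    if (t < size)
        out[t] = in[t] * factor;
}
''', 'scale')

out = cp.empty_like(frame)
npix = frame.size
scale_kernel(((npix + 255) // 256,), (256,),
             (frame, out, cp.float64(2.), cp.int64(npix)))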

Installation

This is a pure Python 3 package with the following dependencies:

  • cupy, which in turn needs CUDA
  • numpy
  • h5py
  • mpi4py (for multiple GPUs)

Features

The reconstruction program implements the EMC algorithm [1] to determine the orientations using a Poisson noise model. It runs on a single CUDA-capable GPU by default and can optionally be parallelized over multiple GPUs with MPI (see Usage). The only unknown is the orientation, and the simulated data has pure Poisson noise.
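As a rough guide to what one EMC iteration involves, here is an illustrative CPU sketch using dense frames with numpy and scipy, rather than the sparse, cupy-based code in this repository:

import numpy as np
from scipy.ndimage import rotate

def emc_iteration(model, frames, num_rot):
    """One expand-maximize-compress update for the in-plane rotation problem.

    model  : (N, N) current estimate of the mask transmission
    frames : (D, N, N) photon-count frames (Poisson data)
    num_rot: number of in-plane rotation samples
    """
    angles = np.arange(num_rot) * 360. / num_rot
    eps = 1.e-10

    # Expand: generate the model view for every sampled orientation
    views = np.array([rotate(model, a, reshape=False, order=1) for a in angles])
    views = np.clip(views, eps, None)

    # Expectation: Poisson log-likelihood of each frame in each orientation,
    # log R_dr = sum_t [ K_dt log W_rt - W_rt ]  (frame-only terms dropped)
    log_R = np.einsum('dxy,rxy->dr', frames, np.log(views)) - views.sum(axis=(1, 2))
    log_R -= log_R.max(axis=1, keepdims=True)
    prob = np.exp(log_R)
    prob /= prob.sum(axis=1, keepdims=True)

    # Maximize: update each view as the probability-weighted average of the frames
    new_views = np.einsum('dr,dxy->rxy', prob, frames) / (prob.sum(axis=0)[:, None, None] + eps)

    # Compress: rotate the updated views back and average them into a single model
    new_model = np.mean([rotate(v, -a, reshape=False, order=1)
                         for v, a in zip(new_views, angles)], axis=0)
    return new_model, prob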

This setup corresponds to a real experiment [2] performed to demonstrate the noise tolerance of both the EMC algorithm and the X-ray detector used to collect the data. The data from that experiment is available on the CXIDB as entry ID 18.

The data format conventions used in this project are inspired by the Dragonfly [3] repository.
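To illustrate the sparse-photon idea behind that convention, the sketch below stores, for each frame, the pixel indices that received exactly one photon and, separately, the indices and counts of multi-photon pixels. The HDF5 dataset names and layout here are assumptions made for illustration, not necessarily those used by CuPADMAN or Dragonfly:

import h5py
import numpy as np

def write_sparse_frames(fname, frames):
    """frames: (D, N, N) integer photon-count frames."""
    with h5py.File(fname, 'w') as f:
        vlen_int = h5py.vlen_dtype(np.int32)
        po = f.create_dataset('place_ones', (len(frames),), dtype=vlen_int)
        pm = f.create_dataset('place_multi', (len(frames),), dtype=vlen_int)
        cm = f.create_dataset('count_multi', (len(frames),), dtype=vlen_int)
        for d, frame in enumerate(frames):
            flat = frame.ravel()
            po[d] = np.where(flat == 1)[0].astype(np.int32)   # single-photon pixels
            multi = np.where(flat > 1)[0]                      # multi-photon pixels
            pm[d] = multi.astype(np.int32)
            cm[d] = flat[multi].astype(np.int32)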

Usage

To perform a quick simulation, run the following commands:

$ mkdir data/
$ ./make_data.py
$ ./emc.py 10

If you have access to multiple GPUs, you can parallelize the reconstruction using MPI. Before running the reconstruction, you have to create a 'devices' file that tells the program which GPU each rank should run on. This is a simple text file with one number per line, each referring to a GPU ID as seen by nvidia-smi.
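For example, a devices.txt for four ranks on a node with four GPUs could simply contain:

0
1
2
3

With this file in place, the reconstruction can be launched with, e.g.,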

$ mpirun -np 4 ./emc.py -d devices.txt 10

In order to see benefits from MPI, you should increase the number of frames and/or the number of rotational samples.

You can also reconstruct real experimental data (instructions here).

A sample config.ini file has been provided to specify the simulation parameters. You can edit the data generation parameters in the [make_data] section and regenerate the frames using the same mask by running

$ ./make_data.py -d
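The exact parameter names in config.ini depend on the version of the code; the snippet below is only a hypothetical illustration of the kind of options a [make_data] section might hold, so consult the provided file for the real keys:

[make_data]
# Hypothetical keys for illustration only
num_data = 1000                      # number of simulated frames
mean_count = 100                     # mean photons per frame
out_photons_file = data/photons.h5   # simulated frames
out_mask_file = data/mask.h5         # ground-truth mask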

Future goals

These are some of the future enhancements we would like to implement:

  • Include shot-by-shot incident fluence variations, and recover them [DONE (trivially) 6e642fe]
  • Include non-uniform background in simulated data and ability to incorporate that information
  • Scale to multiple GPUs, first on same node, but later across nodes [DONE 1078a58]
  • Scale to large data sets and fine orientation sampling without running out of memory [DONE 7728cb9]
  • Use CUDATextureObject API for faster rotations (needs modification of cupy)

References

  1. Loh, Ne-Te Duane, and Veit Elser. "Reconstruction algorithm for single-particle diffraction imaging experiments." Physical Review E 80, no. 2 (2009): 026705.
  2. Philipp, Hugh T., Kartik Ayyer, Mark W. Tate, Veit Elser, and Sol M. Gruner. "Solving structure with sparse, randomly-oriented x-ray data." Optics Express 20, no. 12 (2012): 13129-13137.
  3. Ayyer, Kartik, T-Y. Lan, Veit Elser, and N. Duane Loh. "Dragonfly: an implementation of the expand–maximize–compress algorithm for single-particle imaging." Journal of Applied Crystallography 49, no. 4 (2016): 1320-1335.
