B-Rich/nasa_chandra

MPI+CUDA implementation of sharpening kernel on Chandra X-Ray satellite data (NASA open source)

Danny Gibbs
Harvard-Smithsonian Center for Astrophysics
dgibbs@head.cfa.harvard.edu 

Christopher Lee
Harvard University
cklee@college.harvard.edu 

CS 205 
Computing Foundations for Computational Science
Final Project
4 December 2011

Project Title:
Parallelized Adaptive Smoothing for Chandra X-Ray Observatory Datasets

Our project consists of several parts:
1. Data
	The input PNG files are included in the XXXXXX directory
2. Serial Implementation (the two output normalizations are sketched after this list)
	a) Output normalized by sum
		File Name: "image_adapt_serial_norm.py"
	b) Output normalized by weighted sum
		File Name: "image_adapt_serial_gaussian.py"
3. GPU Implementation
	a) Global Memory
		File Name: "image_adapt_gpu_global.py"
	b) Shared Memory
		File Name: "image_adapt_gpu.py"
4. MPI Implementation
	File Name: "image_adapt_mpi.py"

Notes:

Software:
	We chose Python for all of our implementations.
	For Python installation instructions, please visit:
	http://www.python.org/getit/releases/2.7.2/

	Python Modules:
	Throughout the project we use several Python modules. These are all
	provided by installing SciPy; for instructions on installing SciPy,
	please visit:
	http://www.scipy.org/Installing_SciPy

	To meet our parallel computing needs, also install the following:
	MPI4Py
	http://mpi4py.scipy.org/
	PyCUDA
	http://mathema.tician.de/software/pycuda
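
	Once everything is installed, a quick sanity check (a minimal snippet
	of our own, not part of the project code) is to confirm that the
	required modules import:

import numpy, scipy, mpi4py, pycuda
print("numpy %s, scipy %s, mpi4py %s" %
      (numpy.__version__, scipy.__version__, mpi4py.__version__))
print("pycuda imported OK")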


1. Data
	The included data sets contain the following Chandra Observations
	a. OBSID 114 - Cas A (http://en.wikipedia.org/wiki/Cassiopeia_A)
	b. OBSID 7417 - Cluster in NGC 2071
	c. OBSID 321 - NGC 4472 = M49 (Galaxy in Virgo)
	d. OBSID 11759 - J0417.5-1154 (Strong lensing cluster)

2. Serial Implementation
	How To Run
	
To run the serial version we had to hard-code some of the user-defined values, as there seemed to be an issue reading command-line arguments on the Resonance cluster.
Near the top of the file, the 'file_name' variable specifies which image to run through the algorithm.
Next are the 'Threshold' and 'MaxRad' variables: Threshold is the summed pixel value the growing box must reach, and MaxRad is the maximum box size allowed while trying to reach that threshold. A minimal sketch of this loop appears after the commands below.
The output image is created in the same directory as the input image; we strip the extension from the input file name and append '_smoothed_serial.png'.
To run the program on the Resonance cluster from the command line:
%: gpu-login
%: cd to base directory of our project
%: python image_adapt_serial_norm.py 
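
For reference, here is a minimal sketch of the adaptive smoothing loop described
above. The parameter values and the input path are illustrative only; see
image_adapt_serial_norm.py for the actual code.

import numpy as np
from scipy import misc

file_name = 'data/casa.png'   # hard-coded input image (illustrative path)
Threshold = 30                # summed pixel value the box must reach (example)
MaxRad    = 10                # maximum box size to try (example)

img = misc.imread(file_name, flatten=True).astype(np.float64)
out = np.empty_like(img)
ny, nx = img.shape

for y in range(ny):
    for x in range(nx):
        # Grow the box around (y, x) until the summed pixel values reach
        # Threshold, or give up once the radius hits MaxRad; then write
        # the box average (output normalized by sum).
        for rad in range(1, MaxRad + 1):
            box = img[max(0, y - rad):y + rad + 1,
                      max(0, x - rad):x + rad + 1]
            if box.sum() >= Threshold or rad == MaxRad:
                out[y, x] = box.sum() / box.size
                break

# Output name: input name without its extension + '_smoothed_serial.png'
misc.imsave(file_name.rsplit('.', 1)[0] + '_smoothed_serial.png', out)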


3. GPU Implementation
	How To Run

The aforementioned Threshold and MaxRad parameters also apply in the GPU versions. Set the Debug flag to 1 (or back to 0) to produce gpu_output.txt for comparison with serial_output.txt; the relative error is written to rel_error.txt. Make sure all file paths are correct. A rough sketch of the global-memory kernel follows the commands below.

%: gpu-login 
%: module load packages/pycuda/2011.1.2
%: python image_adapt_gpu_global.py			# to run global memory
%: python image_adapt_gpu.py				# to run shared memory
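
For orientation, a rough sketch of the global-memory approach, with one thread
per output pixel. This is our own illustration, not the code in
image_adapt_gpu_global.py; the image size, block/grid shape, and parameter
values are assumptions.

import numpy as np
import pycuda.autoinit
import pycuda.driver as cuda
from pycuda.compiler import SourceModule

mod = SourceModule("""
__global__ void smooth(const float* img, float* out,
                       int nx, int ny, float thresh, int maxrad)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= nx || y >= ny) return;

    // Same adaptive box as the serial version, read from global memory.
    for (int rad = 1; rad <= maxrad; rad++) {
        float sum = 0.0f;
        int n = 0;
        for (int j = max(0, y - rad); j <= min(ny - 1, y + rad); j++)
            for (int i = max(0, x - rad); i <= min(nx - 1, x + rad); i++) {
                sum += img[j * nx + i];
                n++;
            }
        if (sum >= thresh || rad == maxrad) {
            out[y * nx + x] = sum / n;   // normalize by box size
            break;
        }
    }
}
""")

smooth = mod.get_function("smooth")
img = np.random.rand(256, 256).astype(np.float32)   # stand-in image
out = np.empty_like(img)
smooth(cuda.In(img), cuda.Out(out),
       np.int32(256), np.int32(256), np.float32(30.0), np.int32(10),
       block=(16, 16, 1), grid=(16, 16))

The shared-memory version (image_adapt_gpu.py) follows the same pattern but
presumably stages image tiles in on-chip shared memory to cut down on global
memory loads.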


4. MPI Implementation
	How To Run

%: mpiexec -n 5 python image_adapt_mpi.py	# to run MPI/CUDA
%: python rel_error                         # to check the relative error
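
For orientation, here is a minimal mpi4py sketch of one plausible
decomposition: rank 0 scatters row blocks (padded with MaxRad halo rows so
boxes near block edges see their neighbors), each rank smooths its block, and
rank 0 reassembles the image. smooth_block is a stand-in, and
image_adapt_mpi.py may organize this differently (e.g. launching the CUDA
kernel inside each rank).

import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()
MaxRad = 10

def smooth_block(block):
    # Stand-in for the per-block adaptive smoothing; in the MPI/CUDA
    # version each rank would run its GPU kernel here.
    return block

if rank == 0:
    img = np.random.rand(512, 512)        # stand-in for the input image
    bounds = np.linspace(0, img.shape[0], size + 1).astype(int)
    pieces = []
    for i in range(size):
        top = max(0, bounds[i] - MaxRad)                 # halo above
        bot = min(img.shape[0], bounds[i + 1] + MaxRad)  # halo below
        pieces.append((img[top:bot], bounds[i] - top, bot - bounds[i + 1]))
else:
    pieces = None

block, top_pad, bot_pad = comm.scatter(pieces, root=0)
sm = smooth_block(block)
sm = sm[top_pad:sm.shape[0] - bot_pad]    # trim the halo rows
pieces = comm.gather(sm, root=0)
if rank == 0:
    result = np.vstack(pieces)            # reassembled full image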
