# surfaceProcessing

A package with scripts for running surface-level comparisons of morphometry.

## Installation

Download a copy of the repository (or clone it). If the files are packed in an archive (i.e. if you haven't cloned the repo), unpack them first - e.g. with tar -xzvf for a tarball or unzip for a zip file. Then add the following line to the end of your .bashrc file:

export PYTHONPATH=/Path/To/The/Downloaded/Files

Replace '/Path/To/The/Downloaded/Files' with the actual absolute path to the directory where the package folder is stored. For example, if you unpacked your files to /home/brainmaster/packages/surfaceProcessing, then you should use

export PYTHONPATH=/home/brainmaster/packages

This will allow python to find the package and load all its functions, as long as you start python under your user account.
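As a quick sanity check, this minimal sketch (assuming the package folder is named surfaceProcessing, matching the repository) verifies that python can find the package:

```python
# should succeed without an ImportError if PYTHONPATH is set correctly
import surfaceProcessing
print(surfaceProcessing.__file__)
```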

In addition to the files in this repository, please also download the files in https://github.com/NeuroanatomyAndConnectivity/glyphsets. From there, choose whichever runGradientFor###.sh file is appropriate for your usage scenario and put the absolute path to it in the configure.py file.
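For example, the relevant entry in configure.py might look like the following sketch (the variable name gradientScript is hypothetical - check configure.py for the actual name):

```python
# configure.py (sketch - the actual variable name may differ)
# point this at the runGradientFor###.sh variant you chose above
gradientScript = '/home/brainmaster/packages/glyphsets/runGradientFor###.sh'
```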

## Usage

Make sure that all dependencies are available when executing the script and when starting the condor process. This may mean that you have to call the condor command from a network-accessible directory and also ensure that the files you are running on are accessible from the network. Otherwise, the processing nodes may not be able to access your files and, as a consequence, the condor process may crash.

### Condor usage

Before starting the condor process, make sure that the AFNI, connectome workbench, and freesurfer binaries are available in your current terminal environment (e.g. by typing afni or freesurfer).
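One way to check this from python - a sketch assuming typical binary names (afni, wb_command, recon-all); substitute whichever binaries your pipeline actually calls:

```python
import shutil

# typical entry points for AFNI, connectome workbench, and freesurfer;
# the names are assumptions - adjust them to the binaries your setup uses
for tool in ('afni', 'wb_command', 'recon-all'):
    path = shutil.which(tool)
    print(tool, '->', path if path else 'NOT FOUND')
```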

To submit your job order to condor, go to the condorDir and look for the appropriate .submit file that you would like to run. Then type

condor_submit nameOfCondorFile.submit

to submit the process to condor. You can type

condor_q [-global]

to check the processes (the -global flag is optional and will show an overview of all currently running processes on the condor server).
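If you prefer to drive this from python, a minimal sketch wrapping the two commands above (nameOfCondorFile.submit is the same placeholder as above):

```python
import subprocess

# submit the job description file, then list the queue to check on it;
# substitute the .submit file you actually chose
subprocess.run(['condor_submit', 'nameOfCondorFile.submit'], check=True)
subprocess.run(['condor_q', '-global'], check=True)
```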

Start using the scripts by executing

python wrapper.py

and reading the help file. An example subject list is located at
/scr/kansas1/surchs/testSurface/subjects.txt and this path is also ideal for
testing.

If you would like to generate your own subject list, the nifti files are
located here:

/scr/melisse1/NKI_enhanced/results/
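If the results directory contains one subdirectory per subject, a subject list could be generated with a sketch like this (the per-subject layout and the one-ID-per-line format are assumptions - check the directory and the example subject list first):

```python
import os

# hypothetical sketch: one subject ID per line, taken from the
# subdirectory names under the results folder
resultDir = '/scr/melisse1/NKI_enhanced/results/'
with open('subjects.txt', 'w') as f:
    for sub in sorted(os.listdir(resultDir)):
        if os.path.isdir(os.path.join(resultDir, sub)):
            f.write(sub + '\n')
```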

### Processing

These are the score options currently implemented for the surface processing; a rough illustration of how they could be computed follows the list.

- 'pearsonr': pearson correlation between gradient and overlay
- 'spearmanr': spearman correlation between gradient and overlay
- 'zpear': z-score of the p-value of the pearson correlation
- 'zspear': z-score of the p-value of the spearman correlation
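The sketch below assumes per-vertex arrays gradient and overlay; the z-score convention (normal inverse survival function of the p-value) is an assumption, and the package's own implementation may differ:

```python
import numpy as np
from scipy import stats

# toy per-vertex data standing in for a gradient and a morphometry overlay
rng = np.random.default_rng(0)
gradient = rng.standard_normal(1000)
overlay = rng.standard_normal(1000)

# 'pearsonr' / 'spearmanr': the correlation coefficients themselves
r_pear, p_pear = stats.pearsonr(gradient, overlay)
r_spear, p_spear = stats.spearmanr(gradient, overlay)

# 'zpear' / 'zspear': z-score corresponding to the correlation p-value
z_pear = stats.norm.isf(p_pear)
z_spear = stats.norm.isf(p_spear)

print(r_pear, r_spear, z_pear, z_spear)
```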

### Label-wise processing

As an additional usage scenario, there is the option of computing the average functional-morphometric relationship for all cortical labels defined in the freesurfer segmentation. The only thing you have to do to run your analysis on these labels is to set the variable

doLabel = True

Everything else will be taken care of.

Specifically, a wrapper will be called to extract the annotation file of the specified template into individual label files at a default location. From there, all cortical labels will be read in and processed. Technically, you can replace these labels with your own - however, at the moment there is no safeguard in place to ensure that labels are non-overlapping. If they overlap, bad things will happen.

If you decide to use your own labels - or just want to exclude some of the ones that are currently in use - have a look at the label file that gets generated by the wrapper. It is simply a text file containing the absolute paths to all extracted label files; each line represents a label for one hemisphere. To use custom labels, either point the default label file path to a file you created yourself or change the existing label file itself. A sketch of the latter follows below.
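This sketch filters an existing label file; the file names are hypothetical, and the excluded label is just an example:

```python
# build a custom label file from the one the wrapper generates
with open('labelFile.txt') as f:
    labels = [line.strip() for line in f if line.strip()]

# e.g. drop the corpus callosum label on both hemispheres
keep = [path for path in labels if 'corpuscallosum' not in path]

with open('customLabelFile.txt', 'w') as f:
    f.write('\n'.join(keep) + '\n')
```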

The group analysis is always performed at the vertex level: its computation time is reasonably short, so a label-wise group analysis would probably not speed things up much.

## Dependencies

The gradient scripts used to generate the gradients depend on AFNI, connectome workbench, and freesurfer (see the Condor usage section above).

The python modules need a couple of additional packages that usually come with every larger python distribution (like EPD) but that you might otherwise need to install individually.
