```python
import os

import networkx as nx
import nibabel as nib
from bids import BIDSLayout, BIDSValidator
from nilearn.plotting import plot_stat_map, view_img_on_surf
from nltools.data import Brain_Data
from nltools.mask import expand_mask

base_dir = '/Users/lukechang/Dropbox/Dartbrains'
data_dir = os.path.join(base_dir, 'data', 'Localizer')

layout = BIDSLayout(data_dir, derivatives=True)
```

Now let's load an example participant's preprocessed functional data.

```python
sub = 'S01'
fwhm = 6

data = Brain_Data(layout.get(subject=sub, task='localizer', scope='derivatives',
                             suffix='bold', extension='nii.gz', return_type='file')[0])
smoothed = data.smooth(fwhm=fwhm)
```

Next we need to pick an ROI. Pretty much any type of ROI will work. In this example, we will use a whole-brain parcellation based on similar patterns of coactivation across over 10,000 published studies available in Neurosynth (see this paper for more [details](http://cosanlab.com/static/papers/delaVega_2016_JNeuro.pdf)). We will be using a parcellation of 50 functionally similar ROIs.

```python
mask = Brain_Data('https://neurovault.org/media/images/8423/k50_2mm.nii.gz')
mask.plot()
```

Each ROI in this parcellation has its own unique number. We can expand this so that each ROI becomes its own binary mask using `nltools.mask.expand_mask`. Let's plot the first 5 masks.

```python
mask_x = expand_mask(mask)
mask_x[0:5].plot()
```
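To build intuition for what `expand_mask` is doing, here is a toy numpy sketch of the same idea: turning a single labeled parcellation image into a stack of binary masks, one per ROI. The array values here are made up for illustration; the real function operates on `Brain_Data` objects.

```python
import numpy as np

# Toy 1-D "parcellation": each voxel carries an ROI label (0 = background).
parcellation = np.array([0, 1, 1, 2, 3, 2, 3, 3])

# Conceptually, expand_mask splits this into one binary mask per ROI.
roi_ids = np.unique(parcellation)
roi_ids = roi_ids[roi_ids != 0]  # drop the background label
binary_masks = np.stack([(parcellation == roi).astype(int) for roi in roi_ids])

print(binary_masks.shape)  # (3, 8): one binary mask per ROI
```

Each row of `binary_masks` is 1 where that ROI's label appears and 0 elsewhere, which is exactly the structure `mask_x` has after expansion (one brain image per ROI).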
```python
from nltools.plotting import component_viewer

# base_dir = '../data/localizer/derivatives/preproc/fmriprep'
base_dir = '/Users/lukechang/Dropbox/Dartbrains/Data/preproc/fmriprep'

sub = 'S01'
data = Brain_Data(os.path.join(base_dir, f'sub-{sub}', 'func',
                               f'sub-{sub}_task-localizer_space-MNI152NLin2009cAsym_desc-preproc_bold.nii.gz'))
```

## More Preprocessing

Even though we have technically already run most of the preprocessing, there are a couple more steps that will help make the ICA cleaner. First, we will run a high-pass filter to remove low-frequency scanner drift. We will pick a fairly arbitrary cutoff of 0.0078 Hz (1/128 s). We will also run spatial smoothing with a 6 mm FWHM Gaussian kernel to increase the signal-to-noise ratio at each voxel. These steps are very easy to run using nltools after the data has been loaded.

```python
data = data.filter(sampling_freq=1/2.4, high_pass=1/128)
data = data.smooth(6)
```

## Independent Component Analysis (ICA)

Ok, we are finally ready to run an ICA analysis on our data. ICA attempts to perform blind source separation by decomposing a multivariate signal into additive subcomponents that are maximally independent. We will be using the `decompose()` method on our `Brain_Data` instance. This runs the [FastICA](https://scikit-learn.org/stable/modules/generated/sklearn.decomposition.fastica.html) algorithm implemented by scikit-learn. You can choose whether you want to run spatial ICA by setting `axis='voxels'` or temporal ICA by setting `axis='images'`. We also recommend setting the whitening flag `whiten=True`. By default, `decompose` will estimate the maximum number of components possible given the data. We recommend using a completely arbitrary heuristic of 20-30 components.

```python
tr = 2.4
output = data.decompose(algorithm='ica', n_components=30, axis='images', whiten=True)
```

## Viewing Components

We will use the interactive `component_viewer` from nltools to explore the results of the analysis.
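Before turning to the viewer, it can help to see what the FastICA decomposition is doing on something simpler than brain data. The sketch below runs scikit-learn's `FastICA` directly on a toy matrix shaped like our data (timepoints by voxels), mimicking temporal ICA; all of the toy signal names are illustrative, not part of the dataset.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)

# Toy data: 200 "timepoints" x 50 "voxels", built by mixing 3 independent sources.
t = np.linspace(0, 8, 200)
sources = np.c_[np.sin(2 * t),                # slow sinusoid
                np.sign(np.sin(3 * t)),       # square wave
                rng.standard_normal(200)]     # noise source
mixing = rng.standard_normal((3, 50))
X = sources @ mixing  # shape (200, 50), like an images-by-voxels matrix

# Temporal ICA (analogous to axis='images'): recover 3 independent time courses.
ica = FastICA(n_components=3, random_state=0)
recovered = ica.fit_transform(X)

print(recovered.shape)  # (200, 3): one time course per component
```

With real fMRI data, the recovered time courses are paired with spatial weight maps, which is what the viewer below displays.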
This viewer uses ipywidgets to select the `Component` to view and to set the threshold. You can manually enter a component number or scroll up and down through the components.
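Conceptually, thresholding a component map just hides voxels with small weights. Here is a rough numpy sketch of that kind of operation on made-up values (the cutoff of 1.0 is hypothetical; the viewer's exact thresholding logic may differ):

```python
import numpy as np

# Toy component weight map for 5 "voxels" (illustrative values only).
component_map = np.array([-2.5, -0.4, 0.1, 1.2, 3.0])
threshold = 1.0  # hypothetical cutoff

# Zero out voxels whose absolute weight falls below the cutoff;
# -0.4 and 0.1 are suppressed, the larger weights survive.
thresholded = np.where(np.abs(component_map) >= threshold, component_map, 0)
print(thresholded)
```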