Example 1
import os

from nltools.data import Brain_Data

# Extract the condition label from each beta image filename
conditions = [
    os.path.basename(x).split(f'sub-{sub}_')[1].split('_denoised')[0]
    for x in file_list
]
beta = Brain_Data(file_list)
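# To see what the filename parsing above produces, here is a toy example
# (the filename and condition label are made up to match the naming pattern):

fname = 'sub-01_love_denoised.nii.gz'
condition = fname.split('sub-01_')[1].split('_denoised')[0]
print(condition)  # love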

# Next we will compute the pattern similarity across each beta image. We could do this for the whole brain, but it probably makes more sense to look within a single region. Many papers use a searchlight approach, in which pattern similarity is examined within a small sphere centered on each voxel. The advantage of this approach is that it uses the same number of voxels across searchlights and allows one to investigate the spatial topography at a relatively fine scale. However, this procedure is fairly computationally expensive, as it must be computed over each voxel, and, just like univariate analyses, it requires stringent correction for multiple tests, as we learned in tutorial 12: Thresholding Group Analyses. Personally, I prefer to use whole-brain parcellations, as they provide a nice balance between spatial specificity and computational efficiency. In this tutorial, we will continue to use functional regions of interest from our 50-ROI Neurosynth parcellation. This covers the entire brain at a relatively coarse spatial granularity, but requires several orders of magnitude fewer computations than a voxelwise searchlight approach. This means it will run much faster and will require a considerably less stringent statistical threshold to correct for all independent tests. For example, for 50 tests the Bonferroni-corrected threshold is p < .001 (i.e., .05/50). If we ever wanted finer spatial granularity, we could use larger parcellations (e.g., [100 or 200](https://neurovault.org/collections/2099/)).
#
# Let's load our parcellation mask so that we can examine the pattern similarity across these conditions for each ROI.
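# As a quick check on the Bonferroni arithmetic above, the corrected
# threshold for 50 independent ROI tests works out as:

n_tests = 50
alpha = 0.05
print(alpha / n_tests)  # 0.001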

# In[78]:

from nltools.mask import expand_mask

mask = Brain_Data(os.path.join('..', 'masks', 'k50_2mm.nii.gz'))
mask_x = expand_mask(mask)

mask.plot()
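# Conceptually, expand_mask splits an integer-labeled parcellation into one
# binary mask per ROI. A small numpy sketch of the idea (toy labels, not the
# nltools internals):

import numpy as np

labels = np.array([0, 1, 1, 2, 0, 2])  # toy voxel labels; 0 = background
binary_masks = [(labels == k).astype(int) for k in np.unique(labels) if k != 0]
print(binary_masks[0].tolist())  # [0, 1, 1, 0, 0, 0]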

# Ok, now we will want to calculate the pattern similarity within each ROI across the 10 conditions.
#
# We will loop over each ROI, extract the pattern data across all conditions, and then compute the correlation distance between each pair of conditions. The result will be an `Adjacency` object, which we discussed in Lab 13: Connectivity. We will temporarily store these in a list.
#
# Notice that for each iteration of the loop we apply the ROI mask to our beta images and then calculate the correlation distance.
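# Correlation distance between two patterns is 1 - Pearson r. Here is a toy
# sketch of what a correlation distance matrix looks like, using random data
# with 10 conditions and a hypothetical 500 voxels:

import numpy as np
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(0)
patterns = rng.standard_normal((10, 500))  # 10 conditions x 500 voxels
dist = squareform(pdist(patterns, metric='correlation'))
print(dist.shape)  # (10, 10); diagonal is 0 (each pattern matches itself)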

# In[79]:

out = []
for m in mask_x:
    out.append(beta.apply_mask(m).distance(metric='correlation'))

# Let's plot an example ROI and its associated distance matrix.
#
Example 2
import os
import pathlib

import pandas as pd
from nltools.data import Brain_Data

# %% Loop over subject directories
# inspired by https://stackoverflow.com/questions/43619896/python-pandas-iterate-over-rows-and-access-column-names
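# next(os.walk('.'))[1] returns the names of the immediate subdirectories of
# the current directory. A self-contained illustration using a temporary
# directory with two made-up subject folders:

import os
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    os.makedirs(os.path.join(tmp, 'sub-01'))
    os.makedirs(os.path.join(tmp, 'sub-02'))
    print(sorted(next(os.walk(tmp))[1]))  # ['sub-01', 'sub-02']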
for dirname in next(os.walk('.'))[1]:

    # construct the full file paths
    fp_anat = str(pathlib.PurePath(workdir, dirname, anat))
    fp_anat_betmask = str(pathlib.PurePath(workdir, dirname, anat_betmask))
    fp_cbfmap = str(pathlib.PurePath(workdir, dirname, cbfmap))

    # constructing image title
    img_title = ['CBF', dirname]

    # compare plots with vs. without the skull-strip mask
    img_BD_noMask = Brain_Data(fp_cbfmap)
    img_BD_noMask.plot(anatomical=fp_anat,
                       title=" - ".join(img_title + ['BDnoMask']),
                       output_file="_".join(img_title + ['BDnoMask']))

    img_BD = Brain_Data(fp_cbfmap,
                        mask=fp_anat_betmask)  # applying skull-strip mask
    img_BD.plot(anatomical=fp_anat,
                title=" - ".join(img_title + ['BDwithMask']),
                output_file="_".join(img_title + ['BDwithMask']))

    # read in subject-specific roi.csv file
    subj_roi_df = pd.read_csv(
        str(pathlib.PurePath(workdir, dirname, 'LRTC_roi_10mm.csv')))

    # iterate over the rows of the roi.csv file within each subject's folder
    for row in subj_roi_df.itertuples():