ma.save_results('recognition_vs_recollection')

# <markdowncell>

# This produces the same set of maps we've seen before, except the images now represent a meta-analytic contrast between two specific sets of studies, rather than between one set of studies and all other studies in the database.
# 
# It's worth noting that meta-analytic contrasts generated using Neurosynth should be interpreted very cautiously. Remember that this is a meta-analytic contrast rather than a meta-analysis of contrasts. In the above example, we're comparing activation in all studies in which the term recognition shows up often to activation in all studies in which the term recollection shows up often (implicitly excluding studies that use both terms). We are NOT meta-analytically combining direct contrasts of recollection and recognition, which would be a much more sensible thing to do (but is something that can't be readily automated).
# 
# ### Seed-based coactivation maps
# 
# By now you're all familiar with seed-based functional connectivity. We can do something very similar at a meta-analytic level (e.g., Toro et al., 2008; Robinson et al., 2010; Chang et al., 2012) using the Neurosynth data. Specifically, we can define a seed region and then ask what other regions tend to be reported in studies that report activity in our seed region. The Neurosynth tools make this very easy to do. We can either pass in a mask image defining our ROI, or pass in a list of coordinates to use as the centroids of spheres. In this example, we'll do the latter:

# <codecell>

# Seed-based coactivation
network.coactivation(dataset, [[0, 20, 28]], threshold=0.1, outroot='coactivation_from_coords', r=10)
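
# The seed can also be a Nifti mask image defining an ROI rather than a list
# of coordinates (see the cell above). A sketch of that variant -- the mask
# filename here is hypothetical, so substitute your own file:
# network.coactivation(dataset, 'my_seed_mask.nii.gz', threshold=0.1, outroot='coactivation_from_mask')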

# <markdowncell>

# Here we're generating a coactivation map for a sphere with radius 10 mm centered on an anterior cingulate cortex (ACC) voxel. The threshold argument indicates what proportion of voxels within the ACC sphere have to be activated for a study to be considered 'active'.
# 
# In general, meta-analytic coactivation produces results quite similar to--but substantially less spatially specific than--time series-based functional connectivity. Note that if you're only interested in individual points in the brain, the Neurosynth website provides precomputed coactivation maps for spheres centered on every gray matter voxel.
# 
# ### Decoding your own images
# 
# One of the most useful features of Neurosynth is the ability to 'decode' arbitrary images by assessing their similarity to the reverse inference meta-analysis maps generated for different terms. For example, you might wonder whether a group-level z-score map for some experimental contrast is more consistent with recollection or with recognition. You could even use Neurosynth as a simple (but often effective) classifier by running a series of individual subjects through the decoder and picking the class (i.e., term) with the highest similarity. Perhaps the most powerful--though somewhat more computationally intensive--use is to do open-ended decoding. That is, we can take the entire set of features included in the base Neurosynth data download and rank-order them by similarity to each of our input images.
# 
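# As a quick sketch of the 'classifier' idea, you could decode a few subject-level maps against a handful of terms and simply pick the term with the highest similarity for each image. Something along these lines (the file names below are placeholders, and the exact form of the table returned by decode() can vary a bit across neurosynth versions, so inspect it before relying on its orientation):

# <codecell>

from neurosynth.analysis import decode  # harmless to re-import if already loaded

# Hypothetical subject-level z-stat maps -- substitute your own files.
subject_maps = ['sub01_zstat.nii.gz', 'sub02_zstat.nii.gz']

# Restricting features keeps this fast; omitting the argument uses all features.
decoder = decode.Decoder(dataset, features=['recognition', 'recollection'])
similarities = decoder.decode(subject_maps)

# Each image gets one similarity score per term; the 'class' assigned to an
# image is simply the term with the highest score.
print(similarities)

# <markdowncell>
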
# In this example, we'll decode three insula-based coactivation networks drawn from Chang, Yarkoni, Khaw, & Sanfey (2012). You should substitute your own images into the list below. We assess the similarity of each map with respect to 9 different terms and save the results to a file. Note that if we left the features argument unspecified, the decoder would default to using the entire set of 500+ features (which will take a few minutes on most machines unless you've pregenerated the feature maps--but that's for a different tutorial).

# <codecell>