Requirements are listed in requirements.txt.
You should create a file segmentation/config.py
with variables pointing to the HCP data and a cache directory:
DATA_DIR = "/path/to/hcp/repo"
CACHE_DIR = "/path/to/cache/dir"
- The HCP 900 release, for which we have:
- A cleaned, AC/PC-aligned T1w image of each subject's brain (not in MNI space)
- The Destrieux atlas in voxel space
Data can be downloaded from within the Inria network with:
rsync -avzh --prune-empty-dirs --include="*/" --include="*/T1w/T1*brain.nii.gz" --include="*/T1w/*a2009*.nii.gz" --exclude="*" -e ssh dragostore:/data/data/HCP900/* .
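A minimal sketch of how code could consume segmentation/config.py, assuming the local layout produced by the rsync command above (the helper name is our invention, not part of the plan):

```python
import os

def list_subjects(data_dir):
    """Return HCP subject IDs that have a T1w directory under data_dir.

    Assumes the layout <data_dir>/<subject>/T1w/... that the rsync
    command above produces.
    """
    return sorted(
        entry for entry in os.listdir(data_dir)
        if os.path.isdir(os.path.join(data_dir, entry, "T1w"))
    )
```

In practice this would be called as list_subjects(config.DATA_DIR).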
- Implementation based on PyTorch
- An Overleaf document tracking where we are
- Will upload a Boto-based script to download the files
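Until that script is uploaded, here is a rough sketch of what a Boto-based downloader could look like; the bucket name, key layout, and filenames below are assumptions, not the actual script:

```python
import os

# Assumed public S3 location of the HCP data; verify before use.
BUCKET = "hcp-openaccess"

def t1w_key(subject):
    # Assumed key layout mirroring the local directory structure
    # fetched by the rsync command above.
    return f"HCP_900/{subject}/T1w/T1w_acpc_dc_restore_brain.nii.gz"

def download_t1w(subject, dest_dir):
    import boto3  # imported lazily so the key helper works without boto3
    s3 = boto3.client("s3")
    dest = os.path.join(dest_dir, f"{subject}_T1w_brain.nii.gz")
    s3.download_file(BUCKET, t1w_key(subject), dest)
    return dest
```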
- Keras (install with: conda install -c conda-forge keras)
- An implementation of the U-net
- Generalise it to 3D
- Develop a framework to test performance
- Tune the network architecture
- Check the resulting network's capability to detect sulci that have never been labeled. For this we might want to exclude some sulci from the training set (and not just some subjects)
- Old-style approaches try to name the sulci, which we don't want to do.
- Brain segmentation in grey matter, white matter, and subcortical structures
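As a starting point for the performance-testing framework, a per-label Dice coefficient is a common overlap metric for segmentation; a minimal NumPy sketch (the metric choice and function name are our suggestions, not a decided design):

```python
import numpy as np

def dice(pred, truth, label):
    """Dice overlap for one sulcus/structure label in two label volumes."""
    p = (pred == label)
    t = (truth == label)
    denom = p.sum() + t.sum()
    if denom == 0:
        return float("nan")  # label absent from both volumes
    return 2.0 * np.logical_and(p, t).sum() / denom
```

Evaluating this per excluded sulcus would also support the never-labeled-sulci experiment above.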