Postmortem Processing Tools

High resolution postmortem image registration and layer reconstruction of human cortex from serially sectioned in situ hybridization images.


0. Quick Start

  1. Install Vagrant http://www.vagrantup.com/downloads.html

  2. Clone this repository to a folder on your local machine

    git clone https://github.com/richstoner/postmortem-processing-tools.git

  3. cd into the directory

    cd postmortem-processing-tools

  4. Launch the instance

    vagrant up

  5. Wait for the instance to boot, then go to http://192.2.2.2 in a web browser for more information.


1. Table of Contents

  1. Table of Contents
  2. Problem description
    1. Registration
    2. Point extraction
    3. Reconstruction
  3. Pipeline processing environment
    1. Overview
    2. Using the vagrant box
    3. Using the IPython notebooks
    4. Navigating the source code
  4. Details about the processing environment
  5. Caveats

2. Problem Description

2.1. Registration

Given a series of labeled high resolution images from a piece of human cortex, find a way to register the images into a single stack (a toy alignment sketch follows the list below). Challenges:

1) The images are very large (50k x 25k pixels, 1px = 1µm^2)

2) Each image contains 2 sections of tissue

3) Each label stains the tissue differently

4) Most of the ISH labels are very low contrast

5) No fiducials (such as blockface images) were collected at time of processing
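The production registration code is ITK-based C++ (see /src/itkcpp, Section 3.4). As a toy illustration of the alignment problem only - not the pipeline's method - a translation-only registration of two downsampled sections could look like this with scikit-image (>= 0.17; the file names are hypothetical):

    # Toy translation-only alignment of two downsampled sections.
    # NOT the pipeline's ITK-based registration; illustration only.
    from scipy import ndimage
    from skimage import io
    from skimage.registration import phase_cross_correlation

    fixed = io.imread("section_0100_small.png", as_gray=True)    # hypothetical file
    moving = io.imread("section_0101_small.png", as_gray=True)   # hypothetical file

    # Estimate the (row, col) shift that best aligns `moving` to `fixed`.
    shift, error, _ = phase_cross_correlation(fixed, moving, upsample_factor=10)

    # Apply the recovered translation.
    aligned = ndimage.shift(moving, shift)
    print("estimated shift (rows, cols):", shift)

The real problem also involves rotation, per-label contrast differences, and two sections per image, which is why the pipeline relies on ITK rather than simple phase correlation.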

2.2. Point extraction

For each labeled image, extract the location of labeled cells (a rough extraction sketch follows the list below). Challenges:

1) The images are very large (50k x 25k pixels, 1px = 1µm^2)

2) Each image requires background removal
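The extraction code itself lives in the pmip module (Section 3.4). As a rough sketch of the general approach - HSV conversion, thresholding, connected-component centroids - the following uses scikit-image; the file name and the choice of Otsu thresholding are illustrative assumptions, not the pipeline's exact parameters:

    # Rough sketch of labeled-cell point extraction with scikit-image.
    # File name and thresholding strategy are illustrative assumptions.
    import numpy as np
    from skimage import io, color, measure
    from skimage.filters import threshold_otsu

    rgb = io.imread("ish_section_small.png")   # hypothetical downsampled tile
    hsv = color.rgb2hsv(rgb)
    saturation = hsv[..., 1]                   # stained cells are more saturated

    # Separate foreground (labeled cells) from background.
    mask = saturation > threshold_otsu(saturation)

    # One point per connected component: its centroid.
    labels = measure.label(mask)
    points = np.array([region.centroid for region in measure.regionprops(labels)])
    print("extracted %d candidate cells" % len(points))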

2.3. Reconstruction

For all pointsets for a single label, generate a volume representation of approximate expression. Challenges:

1) Each point set needs to be transformed with the appropriate transform generated from the stack registration

2) Each 2D density representation has to be generated by convolving the transformed point cloud with a 2D Gaussian (see the sketch after this list)

3) Generating a 3D volume from 2D planes requires interpolating missing planes
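A minimal sketch of step 2, assuming the transformed points are already in pixel coordinates (the plane size and sigma below are placeholder values):

    # Minimal density-map sketch: bin transformed points, then smooth.
    # Plane dimensions and sigma are placeholder assumptions.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    h, w = 2500, 5000    # downsampled plane size (placeholder)

    # points: (N, 2) array of transformed (row, col) coordinates
    counts, _, _ = np.histogram2d(points[:, 0], points[:, 1],
                                  bins=(h, w), range=((0, h), (0, w)))

    # Convolving the binned points with a 2D Gaussian approximates density.
    density = gaussian_filter(counts, sigma=25.0)

A sketch of step 3, interpolating the missing planes, appears under Caveats (Section 5).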

Studies involving postmortem tissue are severely limited by the quantity and quality of the tissue available. In many scenarios, the initial experimental design fails to capture the feature of interest.

3. Pipeline processing environment

3.1. Overview

The processing pipeline consists of three primary modules: stack registration, point extraction, and reconstruction. To run the necessary processing steps, we have created a virtual machine with the required configuration.

We are providing a Vagrant box (http://vagrantup.com) with this release. The Vagrant box provides a simple way to get started and check out some of the key functions in the pipeline. However, given the computational requirements for some of the tasks, it may be better to deploy a similar configuration on a large cluster or cloud instance. The original processing pipeline was run on local resources and eventually Amazon's EC2 compute cloud.

3.2. Using the vagrant box

  1. Install Vagrant http://www.vagrantup.com/downloads.html

  2. Clone this repository to a folder on your local machine

    git clone https://github.com/richstoner/postmortem-processing-tools.git

  3. cd into the directory

    cd postmortem-processing-tools

  4. Launch the instance

    vagrant up

  5. On first load, this will download the virtual machine image from Vagrant Cloud. It may take up to an hour to download (2.6 GB, hosted on AWS S3). Once downloaded, the instance will boot on its own.

  6. After boot, go to http://192.2.2.2 in a web browser for more information.

3.3. Using the IPython notebooks

Several examples have been made available as IPython (http://ipython.org) notebooks. Once the instance has booted, you can get to these notebooks by going to http://192.2.2.2:8888 in a web browser (Chrome preferred). Be advised: the IPython notebooks have no restrictions in place regarding user access. Take proper precautions before making these ports available as public-facing web sites (one option, setting a notebook password, is sketched below).
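One simple precaution, assuming the IPython 1.x/2.x-era notebook server this box appears to use: set a notebook password inside the VM. First generate a hash:

    # Run once inside the VM to generate a password hash.
    from IPython.lib import passwd
    print(passwd())    # prompts for a password, prints something like 'sha1:...'

then set it in the notebook config (the path below is IPython's stock default, an assumption for this image):

    # ~/.ipython/profile_default/ipython_notebook_config.py
    c = get_config()
    c.NotebookApp.password = u'sha1:...'    # paste the hash generated above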

**Troubleshooting:** If IPython does not appear to be running, first confirm the instance itself is running. Second, check the status of the IPython process via the Supervisor web interface: http://192.2.2.2:9999.

3.4. Navigating the source code

The source code consists of three major components, all located in the /src directory:

python - The majority of the code is contained within the python directory. Two main modules, aibs and pmip, provide most of the functionality for the processing pipeline. The IPython notebooks demonstrate how to use some of this functionality.

fijimacros - These are ImageJ / FIJI macros that are run from the command line via a Python wrapper (defined in pmip); a sketch of the general pattern follows.
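The actual wrapper is defined in pmip; the general pattern is a headless Fiji invocation via subprocess. The Fiji path, macro name, and argument format below are assumptions for illustration:

    # Sketch of driving a Fiji macro headlessly from Python.
    # Fiji path, macro name, and argument format are assumptions.
    import subprocess

    subprocess.check_call([
        "/opt/fiji/fiji-linux64",          # hypothetical install location
        "--headless",                      # see the X11 caveat in Section 5
        "-macro", "src/fijimacros/particle_analysis.ijm",
        "input.tif,output.csv",            # single argument string for the macro
    ])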

itkcpp - Several C++ files used for efficient image registration via ITK. Compiled binaries from this source code are located in /bin.

IPython notebooks are located in /notebooks, and this documentation is located in /documentation.

4. Details about the processing environment

Currently on Vagrant Cloud: https://vagrantcloud.com/richstoner/postmortem-ipython-precise64

Based on: hashicorp/precise64 (Ubuntu 12.04.3)

Here is a short list of what you'd need to build your own version of this environment:

  • ITK 3.x
  • FIJI (ImageJ)
  • ImageMagick
  • avconv
  • Python 2.7
  • Core scientific Python components from the Anaconda distribution

Note: this pipeline was developed over the span of 3 years - most of the toolchain could be reduced to ITK + Python modules.

Before installing anything

While logged in to the instance:

sudo apt-get update; sudo apt-get upgrade

Python dependencies

Install Conda (from Continuum Analytics)

wget http://repo.continuum.io/miniconda/Miniconda-latest-Linux-x86_64.sh

chmod +x ./Miniconda-latest-Linux-x86_64.sh

./Miniconda-latest-Linux-x86_64.sh

Install additional dependencies

conda install anaconda

FIJI

Getting FIJI

wget http://sourceforge.net/projects/fiji-bi/files/fiji/Madison/fiji-linux64-20110307.tar.bz2/download

mv download fiji.tar.bz2

tar xvjf fiji.tar.bz2

ITK (v3.20)

Getting ITK v3.20 (hard version requirement)

wget http://sourceforge.net/projects/itk/files/itk/3.20/InsightToolkit-3.20.1.tar.gz/download

mv download itk32.tar.gz

tar xvzf itk32.tar.gz

Building ITK requires several steps and is not documented here. For more information, visit http://itk.org/ITK/resources/software.html

ImageMagick & avconv

To provide some additional command-line capabilities and video generation, we use avconv and ImageMagick's convert and composite tools.

sudo apt-get install libav-tools

sudo apt-get install imagemagick

Process management

Install supervisor (to manage IPython)

sudo apt-get install supervisor

Update the supervisor config to a) enable the HTTP interface and b) start the IPython notebook server (details not shown).

5. Caveats

Not all steps are codified - Unfortunately, several steps have not been included in this release because they are primarily manual: specifically, the final interpolation from the generated volume stack to scale-correct volume stacks, and rendering. We leave these as an exercise for the user.

  • For final interpolation, use ImageJ or FIJI (or a NumPy approach like the sketch below)
  • For rendering, we suggest using either a) OsiriX for volume rendering or b) UCSF Chimera for volume + surface rendering.
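For readers who prefer to stay in Python, here is a minimal sketch of linear interpolation between section planes - not the pipeline's exact method. It assumes a (n_sections, h, w) stack of density planes and known section z positions:

    # Minimal plane-interpolation sketch; not the pipeline's exact method.
    # `stack` is a (n_sections, h, w) array of 2D density planes.
    import numpy as np
    from scipy.interpolate import interp1d

    z_known = np.array([0.0, 200.0, 400.0, 1000.0, 1200.0])   # placeholder z (µm)
    f = interp1d(z_known, stack, axis=0)

    # Resample onto a uniform 100 µm grid to fill in the missing planes.
    z_uniform = np.arange(z_known[0], z_known[-1] + 1.0, 100.0)
    volume = f(z_uniform)    # (len(z_uniform), h, w)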

Some processing steps require a specific machine configuration - Several of the processing steps may not run (correctly, or at all) on the Vagrant virtual machine. Consider this instance a preview of the pipeline rather than a fire-and-forget solution.

**Examples:**

Running ImageJ / Fiji particle analysis requires a workstation with X11 running (or hours of experimentation with Xvfb). It also requires ~10 GB of available RAM.

Extracting points via scikit-image requires a machine with ~10 GB of available RAM. This could be avoided by optimizing around the RGB->HSV conversion of the large in-memory images; one such optimization is sketched below.
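Sketched here under the assumption that only the saturation channel is needed downstream: since rgb2hsv operates per pixel, the conversion can be done in row blocks and only one float32 channel kept, capping peak memory:

    # Convert a very large RGB image to HSV in row blocks, keeping only the
    # float32 saturation channel to cap peak memory. rgb2hsv is per-pixel,
    # so block-wise conversion gives identical results.
    import numpy as np
    from skimage import color

    def saturation_tiled(rgb, rows_per_block=1024):
        sat = np.empty(rgb.shape[:2], dtype=np.float32)
        for r in range(0, rgb.shape[0], rows_per_block):
            block = color.rgb2hsv(rgb[r:r + rows_per_block])
            sat[r:r + rows_per_block] = block[..., 1]
        return sat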
