A fork of napari/napari intended for use at the Allen Institute for Cell Science. This fork is unsupported in any way, shape, or form, and its instructions and documentation may differ from the official repo.
napari is a fast, interactive, multi-dimensional image viewer for Python. It's designed for browsing, annotating, and analyzing large multi-dimensional images. It's built on top of PyQt (for the GUI), vispy (for performant GPU-based rendering), and the scientific Python stack (numpy, scipy).
We're developing napari in the open! But the project is in a pre-alpha stage. You can follow progress on this repository, test out new versions as we release them, and contribute ideas and code. Expect breaking changes from patch to patch.
napari can be installed on most macOS and Linux systems with Python 3.6 or 3.7 by running:
$ conda create -n napari python=3.7
$ conda activate napari
$ git clone https://github.com/AllenCellModeling/napari.git
$ cd napari
$ pip install -e .
The code is set up with useful defaults and is configured to copy the files over to your computer first. Please make sure you have 27.9 GB of space free for images before you start.
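If you want to check free space programmatically before kicking off the copy, here is a quick sketch using only the standard library (the 27.9 GB figure is the requirement quoted above):

```python
import shutil

# Free-space check before the image copy begins.
# 27.9 GB is the figure quoted above; adjust if the dataset changes.
required_gb = 27.9
free_gb = shutil.disk_usage(".").free / 1e9
if free_gb < required_gb:
    print(f"Only {free_gb:.1f} GB free; need at least {required_gb} GB.")
else:
    print(f"{free_gb:.1f} GB free -- enough room for the images.")
```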
First start Finder and press ⌘ + K to bring up the "Connect to Server" menu. Enter smb://allen and mount the aics drive. This allows us to automagically copy files over from the file system.
Then start the app. It should start copying files over for you. Once this is done, you should be able to disconnect from the network and annotate stuff remotely. Those results will be saved to your hard drive in the ./data/annotations folder.
After the napari environment is activated in conda, run the following command in the Terminal app:
$ python annotator.py
By default the code uses a .json file that lives at ./data/experiment.json. That file looks like this:
{
"annotator": "user",
"data_csv": "./data/20190520_napari_annotation_files.csv",
"data_dir_local": "./data/images",
"save_dir": "./data/annotations",
"start_from_last_annotation": 1,
"save_if_empty": 0
}
These parameters should be self-explanatory. Please change this configuration to suit your needs.
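For reference, the file parses with Python's standard json module. A minimal sketch of loading and inspecting it (reading from a string here so it runs anywhere; annotator.py would read ./data/experiment.json itself):

```python
import json

# The same preferences shown above, loaded from a string for illustration.
prefs_text = """
{
    "annotator": "user",
    "data_csv": "./data/20190520_napari_annotation_files.csv",
    "data_dir_local": "./data/images",
    "save_dir": "./data/annotations",
    "start_from_last_annotation": 1,
    "save_if_empty": 0
}
"""
prefs = json.loads(prefs_text)

# The 1/0 flags behave as booleans in conditionals.
resume = bool(prefs["start_from_last_annotation"])
```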
To run the annotator with a different .json file do:
$ python annotator.py --prefs_path "/path/to/your/file.json"
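As a purely hypothetical sketch, a script could wire up such a flag with argparse like this (the actual argument handling inside annotator.py may differ):

```python
import argparse

# Hypothetical sketch of accepting --prefs_path; not the real annotator.py code.
parser = argparse.ArgumentParser(description="napari-based annotator")
parser.add_argument("--prefs_path", default="./data/experiment.json",
                    help="path to the preferences .json file")

# Passing an explicit argv list here so the sketch is self-contained.
args = parser.parse_args(["--prefs_path", "/tmp/my_prefs.json"])
print(args.prefs_path)  # /tmp/my_prefs.json
```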
Check out the scripts in the examples folder to see some of the functionality we're developing!
For example, you can add multiple images in different layers and adjust them:
from skimage import data
from skimage.color import rgb2gray
from napari import ViewerApp
from napari.util import app_context

with app_context():
    # create the viewer with four layers
    viewer = ViewerApp(astronaut=rgb2gray(data.astronaut()),
                       photographer=data.camera(),
                       coins=data.coins(),
                       moon=data.moon())
    # remove a layer
    viewer.layers.remove('coins')
    # swap layer order
    viewer.layers['astronaut', 'moon'] = viewer.layers['moon', 'astronaut']
You can add markers on top of an image:
import numpy as np
from skimage import data
from skimage.color import rgb2gray
from napari import ViewerApp
from napari.util import app_context

with app_context():
    # set up the viewer
    viewer = ViewerApp()
    viewer.add_image(rgb2gray(data.astronaut()))
    # create three xy coordinates
    points = np.array([[100, 100], [200, 200], [333, 111]])
    # specify three sizes
    size = np.array([10, 20, 20])
    # add them to the viewer
    markers = viewer.add_markers(points, size=size)
napari supports bidirectional communication between the viewer and the Python kernel, which is especially useful in Jupyter notebooks -- in the example above you can retrieve the locations of the markers, including any additional ones you have drawn, by calling
>>> markers.coords
[[100, 100],
[200, 200],
[333, 111]]
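If you want to persist retrieved coordinates yourself, one sketch using the standard csv module (the filename and column labels here are illustrative; the annotator saves its own results to ./data/annotations):

```python
import csv
import numpy as np

# Coordinates as returned by markers.coords in the example above.
coords = np.array([[100, 100], [200, 200], [333, 111]])

# Write them out as a simple two-column CSV; filename is illustrative.
with open("marker_coords.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["x", "y"])  # matches the "xy coordinates" in the example
    writer.writerows(coords.tolist())
```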
You can render and quickly browse slices of multi-dimensional arrays:
import numpy as np
from skimage import data
from napari import ViewerApp
from napari.util import app_context

with app_context():
    # create fake 3d data
    blobs = np.stack([data.binary_blobs(length=128, blob_size_fraction=0.05,
                                        n_dim=3, volume_fraction=f)
                      for f in np.linspace(0.05, 0.5, 10)], axis=-1)
    # add data to the viewer
    viewer = ViewerApp(blobs.astype(float))
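The viewer's slider steps through the last axis of that stack. A quick shape check of the same stacking logic, with random volumes standing in for binary_blobs so it runs without scikit-image:

```python
import numpy as np

# Ten fake 3D volumes stacked along a new last axis, mirroring the
# np.stack(..., axis=-1) call above (random data stands in for binary_blobs).
volumes = [np.random.rand(32, 32, 32) < f for f in np.linspace(0.05, 0.5, 10)]
blobs = np.stack(volumes, axis=-1).astype(float)

print(blobs.shape)        # (32, 32, 32, 10)
print(blobs[..., 0].shape)  # one 3D volume: (32, 32, 32)
```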
You can draw lines and polygons on an image, including selection and adjustment of shapes and vertices, and control over fill and stroke color. Run examples/add_shapes.py to generate and interact with the following example.
You can also paint pixel-wise labels, useful for creating masks for segmentation, and fill in closed regions using the paint bucket. Run examples/labels-0-2d.py to generate and interact with the following example.
We're working on several features, including
- support for 3D volumetric rendering
- support for multiple canvases
- a plugin ecosystem for integrating image processing and machine learning tools
See this issue for some of the key use cases we're trying to enable, and feel free to add comments or ideas!
Contributions are encouraged! Please read our guide to get started. Given that we're in an early stage, you may want to reach out on GitHub Issues before jumping in.