3D segmentation

Input format

Tiffs with multiple planes and multiple channels are supported in the GUI (you can drag and drop tiffs) and when running in a notebook. To open the GUI with z-stack support, use python -m cellpose --Zstack. Multiplane images should be of shape nplanes x channels x nY x nX or nplanes x nY x nX. You can check the shape of your tiff by running the following in python:

import tifffile
data = tifffile.imread('img.tif')
print(data.shape)  # e.g. (nplanes, nchannels, nY, nX)

If drag-and-drop of the tiff into the GUI does not work correctly, it's likely that the shape of the tiff is incorrect. If drag-and-drop works (you can see a tiff with multiple planes), the GUI will automatically run 3D segmentation and display the results. Watch the command line for progress; using a GPU is recommended to speed up processing.

In the CLI/notebook, you can specify the channel_axis and/or z_axis parameters to specify which axes (0-based) of the image correspond to the image channels and to the z axis. For example, an image with 2 channels of shape (1024,1024,2,105,1) can be specified with channel_axis=2 and z_axis=3. If channel_axis=None, cellpose will try to determine the channel axis automatically by choosing the dimension with the minimal size after squeezing. If z_axis=None, cellpose will automatically select the first non-channel axis of the image as the Z axis. These parameters can be specified on the command line with --channel_axis or --z_axis, or as inputs to model.eval for the Cellpose or CellposeModel classes.
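
For example, a minimal notebook sketch for the image shape above (the model choice, diameter, and channel settings are illustrative assumptions):

from cellpose import models, io

img = io.imread("img.tif")  # hypothetical tiff of shape (1024, 1024, 2, 105, 1)
model = models.CellposeModel(gpu=True, model_type="cyto2")
masks, flows, styles = model.eval(img, diameter=30, do_3D=True,
                                  channel_axis=2, z_axis=3,
                                  channels=[1, 2])  # channel 1 segmented, channel 2 as nuclear channel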

Volumetric stacks do not always have the same sampling in XY as they do in Z. Therefore you can set an anisotropy parameter in the CLI/notebook to account for the difference in sampling, e.g. set it to 2.0 if Z is sampled half as densely as X or Y; the algorithm will then internally upsample Z by 2x.
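
In a notebook, for example (a sketch assuming a single-channel ZYX stack img and a model as above):

masks, flows, styles = model.eval(img, diameter=30, do_3D=True,
                                  anisotropy=2.0,  # Z is upsampled 2x internally
                                  channels=[0, 0])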

Segmentation settings

The default segmentation in the GUI is 2.5D segmentation, where the flows are computed on each YX, ZY and ZX slice and then averaged, and the dynamics are then run in 3D. Specify this segmentation format in the notebook with do_3D=True or in the CLI with --do_3D (the CLI will then segment all tiffs in the folder as 3D tiffs if possible).
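
For example, from the command line (the folder path is a placeholder and cyto2 is one possible model choice):

python -m cellpose --dir /data/volumes --pretrained_model cyto2 --do_3D --save_tif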

If many cells appear fragmented, you can smooth the flows before the dynamics are run in 3D using the flow3D_smooth parameter, which specifies the standard deviation of a Gaussian used to smooth the flows. The default is 0.0, meaning no smoothing. Alternatively (or additionally), you may want to train a model on 2D slices from your 3D data to improve the segmentation (see below).
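
For example (continuing the notebook sketch above; the smoothing value is an example to tune for your data):

masks, flows, styles = model.eval(img, diameter=30, do_3D=True,
                                  flow3D_smooth=1.0,  # stddev of the Gaussian used to smooth the flows
                                  channels=[0, 0])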

The network rescales images using the user diameter and the model diam_mean (usually 30), so, for example, if you input a diameter of 90 and the model was trained with a diameter of 30, the image will be downsampled by a factor of 3 for computing the flows. If resample is enabled, the image will then be upsampled back for finding the masks, which takes additional CPU and GPU memory, so for 3D you may want to set resample=False, or --no_resample in the CLI (more details in Resample).
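
For example (a sketch continuing the notebook example above):

masks, flows, styles = model.eval(img, diameter=90, do_3D=True,
                                  resample=False,  # skip upsampling back to save memory
                                  channels=[0, 0])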

3D segmentation ignores the flow_threshold because we did not find that it helped to filter out false positives in our test 3D cell volume. Instead, we found that setting min_size is a good way to remove false positives. Note that min_size applies per slice when stitch_threshold is used; if you have a 3D minimum size to apply, you will need to remove small masks afterwards, as in the sketch below.
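
A minimal numpy sketch for applying a 3D minimum size after stitching (assuming masks is the stitched 3D label array; the threshold value is hypothetical):

import numpy as np

min_size_3D = 500  # hypothetical 3D minimum size in voxels
labels, counts = np.unique(masks, return_counts=True)
small = labels[(labels > 0) & (counts < min_size_3D)]
masks[np.isin(masks, small)] = 0  # zero out masks below the 3D size threshold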

There may be additional differences in the YZ and XZ slices that make them unusable for 3D segmentation. I'd recommend viewing the volume in those dimensions if the segmentation is failing, using the orthoviews (activate them in the bottom left of the GUI). In those instances, you may want to turn off 3D segmentation (do_3D=False) and instead run with stitch_threshold>0. Cellpose will create ROIs in 2D on each XY slice and then stitch them across slices if the IoU between the mask on the current slice and the next slice is greater than or equal to the stitch_threshold. Alternatively, you can train a separate model for YX slices vs ZY and ZX slices, and then specify the separate model for ZY/ZX slices using the pretrained_model_ortho option in CellposeModel.
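
For example (a sketch; the threshold value and model paths are assumptions for illustration):

# 2D segmentation per XY slice, stitched across Z by IoU
masks, flows, styles = model.eval(img, diameter=30, do_3D=False,
                                  stitch_threshold=0.5, channels=[0, 0])

# or, with a separate model for ZY/ZX slices (model paths are hypothetical)
model = models.CellposeModel(gpu=True, pretrained_model="/models/yx_model",
                             pretrained_model_ortho="/models/zyzx_model")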

Another option is to deblur and upsample anisotropic volumes, as described in the Cellpose3 paper. We have trained a model for this on cyto2 and on nuclei; the models are aniso_cyto2 and aniso_nuclei. You can apply each model to each channel in a volume one at a time and then use both channels for segmentation. Here is example code for a 3D stack with one channel (ZYX) with 3x lower sampling in Z than in XY:

from cellpose import io, denoise, transforms
io.logger_setup()
img0 = io.imread("volume.tif")   # ZYX volume
anisotropy = 3                   # Z pixel size is 3x the XY pixel size
shape = img0.shape               # (Z, Y, X)
print(shape)

# upsample the Z dimension to make the volume isotropic:
# transpose to (Y, Z, X), resize each (Z, X) plane to (Z*anisotropy, X),
# then transpose back to (Z*anisotropy, Y, X)
new_shape = [shape[0] * anisotropy, shape[1], shape[2]]
img = transforms.resize_image(img0.astype("float32").transpose(1, 0, 2),
                              Ly=new_shape[0], Lx=new_shape[2],
                              no_channels=True).transpose(1, 0, 2)
img = transforms.resize_image(img, Ly=new_shape[1], Lx=new_shape[2],
                              no_channels=True)
print(img.shape)                 # (Z*anisotropy, Y, X)

# create DenoiseModel with the anisotropic deblurring+upsampling model
dn_model = denoise.DenoiseModel(model_type="aniso_cyto2", gpu=True)

# apply the model on ZX slices (iterating over Y), then transpose back to ZYX
img_iso = dn_model.eval(img.transpose(1, 0, 2), diameter=30, z_axis=0, channels=[0, 0])
img_iso = img_iso.squeeze().transpose(1, 0, 2)
# (optional) apply the model on ZY slices (iterating over X) and average with the first pass
img_iso2 = dn_model.eval(img.transpose(2, 0, 1), diameter=30, z_axis=0, channels=[0, 0])
img_iso2 = img_iso2.squeeze().transpose(1, 2, 0)
img_iso = (img_iso + img_iso2) / 2
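
The isotropic volume can then be segmented directly, for example (a sketch; the model choice and diameter are assumptions):

from cellpose import models

model = models.CellposeModel(gpu=True, model_type="cyto2")
masks, flows, styles = model.eval(img_iso, diameter=30, do_3D=True, channels=[0, 0])

Since img_iso is now isotropic, no anisotropy parameter is needed at this step.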

Training for 3D segmentation

You can create image crops from z-stacks (in YX, YZ and XZ) using the script cellpose/gui/make_train.py. If you have anisotropic volumes, set the --anisotropy flag to the ratio between the pixel size in Z and in YX, e.g. --anisotropy 5 for a pixel size of 1.0 um in YX and 5.0 um in Z. You can then drag and drop an image from the folder into the GUI and re-train a model by labeling your crops and using the Train option in the GUI (see the Cellpose2 tutorial for more advice). If the model trained on all the crops isn't working well, you can instead separate the crops into two folders (YX and ZY/ZX), train separate networks, and use pretrained_model_ortho when declaring your model. An example make_train.py command is shown after the help message below.

See the help message for more information:

python cellpose/gui/make_train.py --help
usage: make_train.py [-h] [--dir DIR] [--image_path IMAGE_PATH] [--look_one_level_down] [--img_filter IMG_FILTER]
                    [--channel_axis CHANNEL_AXIS] [--z_axis Z_AXIS] [--chan CHAN] [--chan2 CHAN2] [--invert]
                    [--all_channels] [--anisotropy ANISOTROPY] [--sharpen_radius SHARPEN_RADIUS]
                    [--tile_norm TILE_NORM] [--nimg_per_tif NIMG_PER_TIF] [--crop_size CROP_SIZE]

cellpose parameters

options:
  -h, --help            show this help message and exit

input image arguments:
  --dir DIR             folder containing data to run or train on.
  --image_path IMAGE_PATH
                        if given and --dir not given, run on single image instead of folder (cannot train with this
                        option)
  --look_one_level_down
                        run processing on all subdirectories of current folder
  --img_filter IMG_FILTER
                        end string for images to run on
  --channel_axis CHANNEL_AXIS
                        axis of image which corresponds to image channels
  --z_axis Z_AXIS       axis of image which corresponds to Z dimension
  --chan CHAN           channel to segment; 0: GRAY, 1: RED, 2: GREEN, 3: BLUE. Default: 0
  --chan2 CHAN2         nuclear channel (if cyto, optional); 0: NONE, 1: RED, 2: GREEN, 3: BLUE. Default: 0
  --invert              invert grayscale channel
  --all_channels        use all channels in image if using own model and images with special channels
  --anisotropy ANISOTROPY
                        anisotropy of volume in 3D

algorithm arguments:
  --sharpen_radius SHARPEN_RADIUS
                        high-pass filtering radius. Default: 0.0
  --tile_norm TILE_NORM
                        tile normalization block size. Default: 0
  --nimg_per_tif NIMG_PER_TIF
                        number of crops in XY to save per tiff. Default: 10
  --crop_size CROP_SIZE
                        size of random crop to save. Default: 512
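
For example, to make training crops from a folder of anisotropic z-stacks (the folder path and values here are placeholders to adapt to your data):

python cellpose/gui/make_train.py --dir /data/zstacks --anisotropy 5 --nimg_per_tif 15 --crop_size 512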