Cellpose API Guide
Cellpose class
- class cellpose.models.Cellpose(gpu=False, model_type='cyto', net_avg=False, device=None)[source]
main model which combines SizeModel and CellposeModel
- Parameters:
gpu (bool (optional, default False)) – whether or not to use GPU, will check if GPU available
model_type (str (optional, default 'cyto')) – ‘cyto’=cytoplasm model; ‘nuclei’=nucleus model; ‘cyto2’=cytoplasm model with additional user images
net_avg (bool (optional, default False)) – loads the 4 built-in networks and averages them if True, loads one network if False
device (torch device (optional, default None)) – device used for model running / training (torch.device(‘cuda’) or torch.device(‘cpu’)), overrides gpu input, recommended if you want to use a specific GPU (e.g. torch.device(‘cuda:1’))
- eval(x, batch_size=8, channels=None, channel_axis=None, z_axis=None, invert=False, normalize=True, diameter=30.0, do_3D=False, anisotropy=None, net_avg=False, augment=False, tile=True, tile_overlap=0.1, resample=True, interp=True, flow_threshold=0.4, cellprob_threshold=0.0, min_size=15, stitch_threshold=0.0, rescale=None, progress=None, model_loaded=False)[source]
run cellpose and get masks
- Parameters:
x (list or array of images) – can be list of 2D/3D images, or array of 2D/3D images, or 4D image array
batch_size (int (optional, default 8)) – number of 224x224 patches to run simultaneously on the GPU (can make smaller or bigger depending on GPU memory usage)
channels (list (optional, default None)) – list of channels, either of length 2 or of length number of images by 2. First element of list is the channel to segment (0=grayscale, 1=red, 2=green, 3=blue). Second element of list is the optional nuclear channel (0=none, 1=red, 2=green, 3=blue). For instance, to segment grayscale images, input [0,0]. To segment images with cells in green and nuclei in blue, input [2,3]. To segment one grayscale image and one image with cells in green and nuclei in blue, input [[0,0], [2,3]].
channel_axis (int (optional, default None)) – if None, channels dimension is attempted to be automatically determined
z_axis (int (optional, default None)) – if None, z dimension is attempted to be automatically determined
invert (bool (optional, default False)) – invert image pixel intensity before running network (if True, image is also normalized)
normalize (bool (optional, default True)) – normalize data so 0.0=1st percentile and 1.0=99th percentile of image intensities in each channel
diameter (float (optional, default 30.)) – if set to None, then diameter is automatically estimated if size model is loaded
do_3D (bool (optional, default False)) – set to True to run 3D segmentation on 4D image input
anisotropy (float (optional, default None)) – for 3D segmentation, optional rescaling factor (e.g. set to 2.0 if Z is sampled half as dense as X or Y)
net_avg (bool (optional, default False)) – runs the 4 built-in networks and averages them if True, runs one network if False
augment (bool (optional, default False)) – tiles image with overlapping tiles and flips overlapped regions to augment
tile (bool (optional, default True)) – tiles image to ensure GPU/CPU memory usage limited (recommended)
tile_overlap (float (optional, default 0.1)) – fraction of overlap of tiles when computing flows
resample (bool (optional, default True)) – run dynamics at original image size (will be slower but create more accurate boundaries)
interp (bool (optional, default True)) – interpolate during 2D dynamics (not available in 3D) (in previous versions it was False)
flow_threshold (float (optional, default 0.4)) – flow error threshold (all cells with errors below threshold are kept) (not used for 3D)
cellprob_threshold (float (optional, default 0.0)) – all pixels with value above threshold kept for masks, decrease to find more and larger masks
min_size (int (optional, default 15)) – minimum number of pixels per mask, can turn off with -1
stitch_threshold (float (optional, default 0.0)) – if stitch_threshold>0.0 and not do_3D and equal image sizes, masks are stitched in 3D to return volume segmentation
rescale (float (optional, default None)) – if diameter is set to None, and rescale is not None, then rescale is used instead of diameter for resizing image
progress (pyqt progress bar (optional, default None)) – to return progress bar status to GUI
model_loaded (bool (optional, default False)) – internal variable for determining if model has been loaded, used in __main__.py
- Returns:
masks (list of 2D arrays, or single 3D array (if do_3D=True)) – labelled image, where 0=no masks; 1,2,…=mask labels
flows (list of lists 2D arrays, or list of 3D arrays (if do_3D=True)) – flows[k][0] = XY flow in HSV 0-255; flows[k][1] = XY flows at each pixel; flows[k][2] = cell probability (if > cellprob_threshold, pixel is used for dynamics); flows[k][3] = final pixel locations after Euler integration
styles (list of 1D arrays of length 256, or single 1D array (if do_3D=True)) – style vector summarizing each image, also used to estimate size of objects in image
diams (list of diameters, or float (if do_3D=True))
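For orientation, here is a minimal usage sketch; 'img.tif' and the channel choice [2, 3] are assumptions for illustration, not part of the API:

```python
# Minimal sketch: run the combined Cellpose model on one image.
# 'img.tif' is a hypothetical image with cells in green and nuclei in blue.
from cellpose import models, io

img = io.imread('img.tif')
model = models.Cellpose(gpu=False, model_type='cyto')

# diameter=None triggers automatic size estimation via the SizeModel
masks, flows, styles, diams = model.eval(img, diameter=None, channels=[2, 3])
print(f'{masks.max()} cells found, estimated diameter {diams:.1f}')
```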
CellposeModel
- class cellpose.models.CellposeModel(gpu=False, pretrained_model=False, model_type=None, net_avg=False, diam_mean=30.0, device=None, residual_on=True, style_on=True, concatenation=False, nchan=2)[source]
- Parameters:
gpu (bool (optional, default False)) – whether or not to save model to GPU, will check if GPU available
pretrained_model (str or list of strings (optional, default False)) – full path to pretrained cellpose model(s), if None or False, no model loaded
model_type (str (optional, default None)) – any model that is available in the GUI, use name in GUI e.g. ‘livecell’ (can be user-trained or model zoo)
net_avg (bool (optional, default False)) – loads the 4 built-in networks and averages them if True, loads one network if False
diam_mean (float (optional, default 30.)) – mean ‘diameter’, 30. is built in value for ‘cyto’ model; 17. is built in value for ‘nuclei’ model; if saved in custom model file (cellpose>=2.0) then it will be loaded automatically and overwrite this value
device (torch device (optional, default None)) – device used for model running / training (torch.device(‘cuda’) or torch.device(‘cpu’)), overrides gpu input, recommended if you want to use a specific GPU (e.g. torch.device(‘cuda:1’))
residual_on (bool (optional, default True)) – use 4 conv blocks with skip connections per layer instead of 2 conv blocks like conventional u-nets
style_on (bool (optional, default True)) – use skip connections from style vector to all upsampling layers
concatenation (bool (optional, default False)) – if True, concatenate downsampling block outputs with upsampling block inputs; default is to add
nchan (int (optional, default 2)) – number of channels to use as input to network, default is 2 (cyto + nuclei) or (nuclei + zeros)
- eval(x, batch_size=8, channels=None, channel_axis=None, z_axis=None, normalize=True, invert=False, rescale=None, diameter=None, do_3D=False, anisotropy=None, net_avg=False, augment=False, tile=True, tile_overlap=0.1, resample=True, interp=True, flow_threshold=0.4, cellprob_threshold=0.0, compute_masks=True, min_size=15, stitch_threshold=0.0, progress=None, loop_run=False, model_loaded=False)[source]
segment list of images x, or 4D array - Z x nchan x Y x X
- Parameters:
x (list or array of images) – can be list of 2D/3D/4D images, or array of 2D/3D/4D images
batch_size (int (optional, default 8)) – number of 224x224 patches to run simultaneously on the GPU (can make smaller or bigger depending on GPU memory usage)
channels (list (optional, default None)) – list of channels, either of length 2 or of length number of images by 2. First element of list is the channel to segment (0=grayscale, 1=red, 2=green, 3=blue). Second element of list is the optional nuclear channel (0=none, 1=red, 2=green, 3=blue). For instance, to segment grayscale images, input [0,0]. To segment images with cells in green and nuclei in blue, input [2,3]. To segment one grayscale image and one image with cells in green and nuclei in blue, input [[0,0], [2,3]].
channel_axis (int (optional, default None)) – if None, channels dimension is attempted to be automatically determined
z_axis (int (optional, default None)) – if None, z dimension is attempted to be automatically determined
normalize (bool (default, True)) – normalize data so 0.0=1st percentile and 1.0=99th percentile of image intensities in each channel
invert (bool (optional, default False)) – invert image pixel intensity before running network
diameter (float (optional, default None)) – diameter for each image, if diameter is None, set to diam_mean or diam_train if available
rescale (float (optional, default None)) – resize factor for each image, if None, set to 1.0; (only used if diameter is None)
do_3D (bool (optional, default False)) – set to True to run 3D segmentation on 4D image input
anisotropy (float (optional, default None)) – for 3D segmentation, optional rescaling factor (e.g. set to 2.0 if Z is sampled half as dense as X or Y)
net_avg (bool (optional, default False)) – runs the 4 built-in networks and averages them if True, runs one network if False
augment (bool (optional, default False)) – tiles image with overlapping tiles and flips overlapped regions to augment
tile (bool (optional, default True)) – tiles image to ensure GPU/CPU memory usage limited (recommended)
tile_overlap (float (optional, default 0.1)) – fraction of overlap of tiles when computing flows
resample (bool (optional, default True)) – run dynamics at original image size (will be slower but create more accurate boundaries)
interp (bool (optional, default True)) – interpolate during 2D dynamics (not available in 3D) (in previous versions it was False)
flow_threshold (float (optional, default 0.4)) – flow error threshold (all cells with errors below threshold are kept) (not used for 3D)
cellprob_threshold (float (optional, default 0.0)) – all pixels with value above threshold kept for masks, decrease to find more and larger masks
compute_masks (bool (optional, default True)) – Whether or not to compute dynamics and return masks. This is set to False when retrieving the styles for the size model.
min_size (int (optional, default 15)) – minimum number of pixels per mask, can turn off with -1
stitch_threshold (float (optional, default 0.0)) – if stitch_threshold>0.0 and not do_3D, masks are stitched in 3D to return volume segmentation
progress (pyqt progress bar (optional, default None)) – to return progress bar status to GUI
loop_run (bool (optional, default False)) – internal variable for determining if model has been loaded, stops model loading in loop over images
model_loaded (bool (optional, default False)) – internal variable for determining if model has been loaded, used in __main__.py
- Returns:
masks (list of 2D arrays, or single 3D array (if do_3D=True)) – labelled image, where 0=no masks; 1,2,…=mask labels
flows (list of lists 2D arrays, or list of 3D arrays (if do_3D=True)) – flows[k][0] = XY flow in HSV 0-255; flows[k][1] = XY flows at each pixel; flows[k][2] = cell probability (if > cellprob_threshold, pixel is used for dynamics); flows[k][3] = final pixel locations after Euler integration
styles (list of 1D arrays of length 64, or single 1D array (if do_3D=True)) – style vector summarizing each image, also used to estimate size of objects in image
- train(train_data, train_labels, train_files=None, test_data=None, test_labels=None, test_files=None, channels=None, normalize=True, save_path=None, save_every=100, save_each=False, learning_rate=0.2, n_epochs=500, momentum=0.9, SGD=True, weight_decay=1e-05, batch_size=8, nimg_per_epoch=None, rescale=True, min_train_masks=5, model_name=None)[source]
train network with images train_data
- Parameters:
train_data (list of arrays (2D or 3D)) – images for training
train_labels (list of arrays (2D or 3D)) – labels for train_data, where 0=no masks; 1,2,…=mask labels can include flows as additional images
train_files (list of strings) – file names for images in train_data (to save flows for future runs)
test_data (list of arrays (2D or 3D)) – images for testing
test_labels (list of arrays (2D or 3D)) – labels for test_data, where 0=no masks; 1,2,…=mask labels; can include flows as additional images
test_files (list of strings) – file names for images in test_data (to save flows for future runs)
channels (list of ints (default, None)) – channels to use for training
normalize (bool (default, True)) – normalize data so 0.0=1st percentile and 1.0=99th percentile of image intensities in each channel
save_path (string (default, None)) – where to save trained model, if None it is not saved
save_every (int (default, 100)) – save network every [save_every] epochs
save_each (bool (default, False)) – save the network to a new filename at every [save_every] epoch instead of overwriting the previous save
learning_rate (float or list/np.ndarray (default, 0.2)) – learning rate for training, if list, must be same length as n_epochs
n_epochs (int (default, 500)) – how many times to go through whole training set during training
momentum (float (default, 0.9)) – momentum for SGD optimizer
weight_decay (float (default, 0.00001)) – weight decay for the optimizer
SGD (bool (default, True)) – use SGD as optimization instead of RAdam
batch_size (int (optional, default 8)) – number of 224x224 patches to run simultaneously on the GPU (can make smaller or bigger depending on GPU memory usage)
nimg_per_epoch (int (optional, default None)) – minimum number of images to train on per epoch, with a small training set (< 8 images) it may help to set to 8
rescale (bool (default, True)) – whether or not to rescale images to diam_mean during training, if True it assumes you will fit a size model after training or resize your images accordingly, if False it will try to train the model to be scale-invariant (works worse)
min_train_masks (int (default, 5)) – minimum number of masks an image must have to use in training set
model_name (str (default, None)) – name of network, otherwise saved with name as params + training start time
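A hedged sketch of a typical fine-tuning call follows; train_imgs and train_lbls are hypothetical lists of 2D numpy arrays, and the returned model path assumes cellpose>=2.0 behavior:

```python
# Hedged sketch: fine-tune CellposeModel on user-annotated data.
# train_imgs / train_lbls are hypothetical lists of 2D arrays (images / masks).
from cellpose import models

model = models.CellposeModel(gpu=True, model_type='cyto2')
model_path = model.train(train_imgs, train_lbls,
                         channels=[0, 0],        # grayscale images
                         save_path='./models',   # saved in a 'models' subfolder
                         n_epochs=500,
                         learning_rate=0.2,
                         model_name='my_model')
```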
SizeModel
- class cellpose.models.SizeModel(cp_model, device=None, pretrained_size=None, **kwargs)[source]
linear regression model for determining the size of objects in an image; used to rescale images before input to cp_model. Uses the style vectors computed by cp_model.
- Parameters:
cp_model (UnetModel or CellposeModel) – model from which to get styles
device (torch device (optional, default None)) – device used for model running / training (torch.device(‘cuda’) or torch.device(‘cpu’)), overrides gpu input, recommended if you want to use a specific GPU (e.g. torch.device(‘cuda:1’))
pretrained_size (str) – path to pretrained size model
- eval(x, channels=None, channel_axis=None, normalize=True, invert=False, augment=False, tile=True, batch_size=8, progress=None, interp=True)[source]
use images x to produce style or use style input to predict size of objects in image
Object size estimation is done in two steps: 1. use a linear regression model to predict size from the style vector of the image; 2. resize the image to the predicted size and run CellposeModel to get output masks.
Take the median object size of the predicted masks as the final predicted size.
- Parameters:
x (list or array of images) – can be list of 2D/3D images, or array of 2D/3D images
channels (list (optional, default None)) – list of channels, either of length 2 or of length number of images by 2. First element of list is the channel to segment (0=grayscale, 1=red, 2=green, 3=blue). Second element of list is the optional nuclear channel (0=none, 1=red, 2=green, 3=blue). For instance, to segment grayscale images, input [0,0]. To segment images with cells in green and nuclei in blue, input [2,3]. To segment one grayscale image and one image with cells in green and nuclei in blue, input [[0,0], [2,3]].
channel_axis (int (optional, default None)) – if None, channels dimension is attempted to be automatically determined
normalize (bool (default, True)) – normalize data so 0.0=1st percentile and 1.0=99th percentile of image intensities in each channel
invert (bool (optional, default False)) – invert image pixel intensity before running network
augment (bool (optional, default False)) – tiles image with overlapping tiles and flips overlapped regions to augment
tile (bool (optional, default True)) – tiles image to ensure GPU/CPU memory usage limited (recommended)
progress (pyqt progress bar (optional, default None)) – to return progress bar status to GUI
- Returns:
diam (array, float) – final estimated diameters from images x or styles style after running both steps
diam_style (array, float) – estimated diameters from style alone
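A sketch of a typical call; the pretrained size model path and imgs list are hypothetical:

```python
# Hedged sketch: diameter estimation with SizeModel.
from cellpose import models

cp = models.CellposeModel(model_type='cyto')
sz = models.SizeModel(cp, pretrained_size=size_model_path)  # hypothetical path

# imgs is a hypothetical list of 2D images
diam, diam_style = sz.eval(imgs, channels=[0, 0])
```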
- train(train_data, train_labels, test_data=None, test_labels=None, channels=None, normalize=True, learning_rate=0.2, n_epochs=10, l2_regularization=1.0, batch_size=8)[source]
train size model with images train_data to estimate linear model from styles to diameters
- Parameters:
train_data (list of arrays (2D or 3D)) – images for training
train_labels (list of arrays (2D or 3D)) – labels for train_data, where 0=no masks; 1,2,…=mask labels can include flows as additional images
channels (list of ints (default, None)) – channels to use for training
normalize (bool (default, True)) – normalize data so 0.0=1st percentile and 1.0=99th percentile of image intensities in each channel
n_epochs (int (default, 10)) – how many times to go through whole training set (taking random patches) for styles for diameter estimation
l2_regularization (float (default, 1.0)) – regularize linear model from styles to diameters
batch_size (int (optional, default 8)) – number of 224x224 patches to run simultaneously on the GPU (can make smaller or bigger depending on GPU memory usage)
Metrics
- cellpose.metrics.aggregated_jaccard_index(masks_true, masks_pred)[source]
AJI = intersection of all matched masks / union of all masks
- Parameters:
masks_true (list of ND-arrays (int) or ND-array (int)) – where 0=NO masks; 1,2… are mask labels
masks_pred (list of ND-arrays (int) or ND-array (int)) – ND-array (int) where 0=NO masks; 1,2… are mask labels
- Returns:
aji
- Return type:
aggregated jaccard index for each set of masks
- cellpose.metrics.average_precision(masks_true, masks_pred, threshold=[0.5, 0.75, 0.9])[source]
average precision estimation: AP = TP / (TP + FP + FN)
This function is based heavily on the fast stardist matching functions (https://github.com/mpicbg-csbd/stardist/blob/master/stardist/matching.py)
- Parameters:
masks_true (list of ND-arrays (int) or ND-array (int)) – where 0=NO masks; 1,2… are mask labels
masks_pred (list of ND-arrays (int) or ND-array (int)) – ND-array (int) where 0=NO masks; 1,2… are mask labels
- Returns:
ap (array [len(masks_true) x len(threshold)]) – average precision at thresholds
tp (array [len(masks_true) x len(threshold)]) – number of true positives at thresholds
fp (array [len(masks_true) x len(threshold)]) – number of false positives at thresholds
fn (array [len(masks_true) x len(threshold)]) – number of false negatives at thresholds
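For example, the metrics can be combined like this; masks_gt and masks_pred are hypothetical lists of 2D integer label arrays:

```python
# Hedged sketch: evaluate predictions against ground truth.
from cellpose import metrics

ap, tp, fp, fn = metrics.average_precision(masks_gt, masks_pred,
                                           threshold=[0.5, 0.75, 0.9])
print('AP@0.5 per image:', ap[:, 0])

aji = metrics.aggregated_jaccard_index(masks_gt, masks_pred)
print('mean AJI:', aji.mean())
```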
- cellpose.metrics.boundary_scores(masks_true, masks_pred, scales)[source]
boundary precision / recall / Fscore
- cellpose.metrics.flow_error(maski, dP_net, use_gpu=False, device=None)[source]
error in flows from predicted masks vs flows predicted by network run on image
This function serves to benchmark the quality of masks. It works as follows: 1. the predicted masks are used to create a flow field; 2. these mask-flows are compared to the flows that the network predicted.
If there is a discrepancy between the flows, it suggests that the mask is incorrect. Masks with flow errors greater than 0.4 are discarded by default. This setting can be changed in Cellpose.eval or CellposeModel.eval.
- Parameters:
maski (ND-array (int)) – masks produced from running dynamics on dP_net, where 0=NO masks; 1,2… are mask labels
dP_net (ND-array (float)) – ND flows where dP_net.shape[1:] = maski.shape
- Returns:
flow_errors (float array with length maski.max()) – mean squared error between predicted flows and flows from masks
dP_masks (ND-array (float)) – ND flows produced from the predicted masks
Flows to masks
- cellpose.dynamics.compute_masks(dP, cellprob, p=None, niter=200, cellprob_threshold=0.0, flow_threshold=0.4, interp=True, do_3D=False, min_size=15, resize=None, use_gpu=False, device=None)[source]
compute masks using dynamics from dP, cellprob, and boundary
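compute_masks wraps the full flows-to-masks pipeline (follow_flows, get_masks, and remove_bad_flow_masks below). A sketch, assuming dP (2 x Ly x Lx) and cellprob (Ly x Lx) came from a network forward pass:

```python
# Hedged sketch: recover masks from network outputs.
from cellpose import dynamics

masks, p = dynamics.compute_masks(dP, cellprob,
                                  cellprob_threshold=0.0,
                                  flow_threshold=0.4,
                                  min_size=15)
```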
- cellpose.dynamics.follow_flows(dP, mask=None, niter=200, interp=True, use_gpu=True, device=None)[source]
define pixels and run dynamics to recover masks in 2D
Pixels are initialized on a meshgrid. Only pixels with non-zero cell probability are used (as defined by inds).
- Parameters:
dP (float32, 3D or 4D array) – flows [axis x Ly x Lx] or [axis x Lz x Ly x Lx]
mask ((optional, default None)) – pixel mask to seed masks. Useful when flows have low magnitudes.
niter (int (optional, default 200)) – number of iterations of dynamics to run
interp (bool (optional, default True)) – interpolate during 2D dynamics (not available in 3D) (in previous versions + paper it was False)
use_gpu (bool (optional, default True)) – use GPU to run interpolated dynamics (faster than CPU)
- Returns:
p (float32, 3D or 4D array) – final locations of each pixel after dynamics; [axis x Ly x Lx] or [axis x Lz x Ly x Lx]
inds (int32, 3D or 4D array) – indices of pixels used for dynamics; [axis x Ly x Lx] or [axis x Lz x Ly x Lx]
- cellpose.dynamics.get_masks(p, iscell=None, rpad=20)[source]
create masks using pixel convergence after running dynamics
Makes a histogram of final pixel locations p, initializes masks at the peaks of the histogram, and extends the masks from the peaks so that they include all pixels with more than 2 final pixels p. Masks with inconsistent flows can then be removed with remove_bad_flow_masks.
- Parameters:
p (float32, 3D or 4D array) – final locations of each pixel after dynamics, size [axis x Ly x Lx] or [axis x Lz x Ly x Lx]
iscell (bool, 2D or 3D array) – if iscell is not None, set pixels that are iscell False to stay in their original location.
rpad (int (optional, default 20)) – histogram edge padding
- Returns:
M0 – masks with inconsistent flow masks removed, 0=NO masks; 1,2,…=mask labels, size [Ly x Lx] or [Lz x Ly x Lx]
- Return type:
int, 2D or 3D array
- cellpose.dynamics.labels_to_flows(labels, files=None, use_gpu=False, device=None, redo_flows=False)[source]
convert labels (list of masks or flows) to flows for training model
if files is not None, flows are saved to files to be reused
- Parameters:
labels (list of ND-arrays) – labels[k] can be 2D or 3D, if [3 x Ly x Lx] then it is assumed that flows were precomputed. Otherwise labels[k][0] or labels[k] (if 2D) is used to create flows and cell probabilities.
- Returns:
flows – flows[k][0] is labels[k], flows[k][1] is the cell probability map, flows[k][2] is the Y flow, and flows[k][3] is the X flow
- Return type:
list of [4 x Ly x Lx] arrays
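A sketch for precomputing training flows once and caching them; labels is a hypothetical list of 2D integer mask arrays:

```python
# Hedged sketch: convert mask labels to flow targets for training.
from cellpose import dynamics

flows = dynamics.labels_to_flows(labels, files=None)
print(flows[0].shape)  # expected (4, Ly, Lx)
```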
- cellpose.dynamics.map_coordinates(I, yc, xc, Y)
bilinear interpolation of image ‘I’ at y coordinates yc and x coordinates xc, written in-place into output Y
- Parameters:
I (C x Ly x Lx) –
yc (ni) – new y coordinates
xc (ni) – new x coordinates
Y (C x ni) – I sampled at (yc,xc)
- cellpose.dynamics.masks_to_flows(masks, use_gpu=False, device=None)[source]
convert masks to flows using diffusion from center pixel
Center of masks where diffusion starts is defined to be the closest pixel to the median of all pixels that is inside the mask. Result of diffusion is converted into flows by computing the gradients of the diffusion density map.
- Parameters:
masks (int, 2D or 3D array) – labelled masks 0=NO masks; 1,2,…=mask labels
- Returns:
mu (float, 3D or 4D array) – flows in Y = mu[-2], flows in X = mu[-1]. if masks are 3D, flows in Z = mu[0].
mu_c (float, 2D or 3D array) – for each pixel, the distance to the center of the mask in which it resides
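A sketch; masks is a hypothetical 2D integer label array:

```python
# Hedged sketch: labels -> flow field.
from cellpose import dynamics

mu, mu_c = dynamics.masks_to_flows(masks)
dY, dX = mu[-2], mu[-1]   # Y and X flow components
```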
- cellpose.dynamics.masks_to_flows_cpu(masks, device=None)[source]
convert masks to flows using diffusion from center pixel. Center of masks where diffusion starts is defined to be the closest pixel to the median of all pixels that is inside the mask. Result of diffusion is converted into flows by computing the gradients of the diffusion density map.
- Parameters:
masks (int, 2D array) – labelled masks 0=NO masks; 1,2,…=mask labels
- Returns:
mu (float, 3D array) – flows in Y = mu[-2], flows in X = mu[-1]
mu_c (float, 2D array) – for each pixel, the distance to the center of the mask in which it resides
- cellpose.dynamics.masks_to_flows_gpu(masks, device=None)[source]
convert masks to flows using diffusion from center pixel. Center of masks where diffusion starts is defined using the center of mass (COM).
- Parameters:
masks (int, 2D or 3D array) – labelled masks 0=NO masks; 1,2,…=mask labels
- Returns:
mu (float, 3D or 4D array) – flows in Y = mu[-2], flows in X = mu[-1]. if masks are 3D, flows in Z = mu[0].
mu_c (float, 2D or 3D array) – for each pixel, the distance to the center of the mask in which it resides
- cellpose.dynamics.remove_bad_flow_masks(masks, flows, threshold=0.4, use_gpu=False, device=None)[source]
remove masks which have inconsistent flows
Uses metrics.flow_error to compute flows from predicted masks and compare flows to predicted flows from network. Discards masks with flow errors greater than the threshold.
- Parameters:
masks (int, 2D or 3D array) – labelled masks, 0=NO masks; 1,2,…=mask labels, size [Ly x Lx] or [Lz x Ly x Lx]
flows (float, 3D or 4D array) – flows [axis x Ly x Lx] or [axis x Lz x Ly x Lx]
threshold (float (optional, default 0.4)) – masks with flow error greater than threshold are discarded.
- Returns:
masks – masks with inconsistent flow masks removed, 0=NO masks; 1,2,…=mask labels, size [Ly x Lx] or [Lz x Ly x Lx]
- Return type:
int, 2D or 3D array
- cellpose.dynamics.steps2D(p, dP, inds, niter)
run dynamics of pixels to recover masks in 2D
Euler integration of dynamics dP for niter steps
- Parameters:
p (float32, 3D array) – pixel locations [axis x Ly x Lx] (start at initial meshgrid)
dP (float32, 3D array) – flows [axis x Ly x Lx]
inds (int32, 2D array) – non-zero pixels to run dynamics on [npixels x 2]
niter (int32) – number of iterations of dynamics to run
- Returns:
p – final locations of each pixel after dynamics
- Return type:
float32, 3D array
- cellpose.dynamics.steps3D(p, dP, inds, niter)
run dynamics of pixels to recover masks in 3D
Euler integration of dynamics dP for niter steps
- Parameters:
p (float32, 4D array) – pixel locations [axis x Lz x Ly x Lx] (start at initial meshgrid)
dP (float32, 4D array) – flows [axis x Lz x Ly x Lx]
inds (int32, 2D array) – non-zero pixels to run dynamics on [npixels x 3]
niter (int32) – number of iterations of dynamics to run
- Returns:
p – final locations of each pixel after dynamics
- Return type:
float32, 4D array
Image transforms
- cellpose.transforms.average_tiles(y, ysub, xsub, Ly, Lx)[source]
average results of network over tiles
- Parameters:
y (float, [ntiles x nclasses x bsize x bsize]) – output of cellpose network for each tile
ysub (list) – list of arrays with start and end of tiles in Y of length ntiles
xsub (list) – list of arrays with start and end of tiles in X of length ntiles
Ly (int) – size of pre-tiled image in Y (may be larger than original image if image size is less than bsize)
Lx (int) – size of pre-tiled image in X (may be larger than original image if image size is less than bsize)
- Returns:
yf – network output averaged over tiles
- Return type:
float32, [nclasses x Ly x Lx]
- cellpose.transforms.convert_image(x, channels, channel_axis=None, z_axis=None, do_3D=False, normalize=True, invert=False, nchan=2)[source]
return image with z first, channels last and normalized intensities
- cellpose.transforms.make_tiles(imgi, bsize=224, augment=False, tile_overlap=0.1)[source]
make tiles of image to run at test-time
- if augment is True, tiles overlap by half the tile size and are flipped in a 2x2 pattern: original; flipped vertically; flipped horizontally; flipped vertically and horizontally
- Parameters:
imgi (float32) – array that’s nchan x Ly x Lx
bsize (float (optional, default 224)) – size of tiles
augment (bool (optional, default False)) – flip tiles and overlap them by half the tile size
tile_overlap (float (optional, default 0.1)) – fraction of overlap of tiles
- Returns:
IMG (float32) – array that’s ntiles x nchan x bsize x bsize
ysub (list) – list of arrays with start and end of tiles in Y of length ntiles
xsub (list) – list of arrays with start and end of tiles in X of length ntiles
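make_tiles pairs with average_tiles for tiled inference. A sketch with a placeholder network; net is an assumption, and shapes follow the docstrings above:

```python
# Hedged sketch: tiled inference, then averaging overlapping tiles.
import numpy as np
from cellpose import transforms

# img is a hypothetical nchan x Ly x Lx float32 array; net is a placeholder
IMG, ysub, xsub, Ly, Lx = transforms.make_tiles(img, bsize=224, tile_overlap=0.1)
y = np.stack([net(tile) for tile in IMG])   # ntiles x nclasses x bsize x bsize
yf = transforms.average_tiles(y, ysub, xsub, Ly, Lx)
yf = yf[:, :img.shape[1], :img.shape[2]]    # crop padding back to original size
```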
- cellpose.transforms.move_axis(img, m_axis=-1, first=True)[source]
move axis m_axis to first or last position
- cellpose.transforms.move_min_dim(img, force=False)[source]
move the minimum dimension last as channels if it is < 10, or if force==True
- cellpose.transforms.normalize99(Y, lower=1, upper=99)[source]
normalize image so 0.0 is 1st percentile and 1.0 is 99th percentile
- cellpose.transforms.normalize_img(img, axis=-1, invert=False)[source]
normalize each channel of the image so that 0.0=1st percentile and 1.0=99th percentile of image intensities
optional inversion
- Parameters:
img (ND-array (at least 3 dimensions)) –
axis (channel axis to loop over for normalization) –
invert (invert image (useful if cells are dark instead of bright)) –
- Returns:
img – normalized image of same size
- Return type:
ND-array, float32
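A sketch of percentile normalization; random data stands in for a real image:

```python
# Hedged sketch: per-channel 1st/99th percentile normalization.
import numpy as np
from cellpose import transforms

img = np.random.rand(256, 256, 2).astype(np.float32)  # Ly x Lx x nchan
img_norm = transforms.normalize_img(img, axis=-1)
```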
- cellpose.transforms.pad_image_ND(img0, div=16, extra=1)[source]
pad image for test-time so that its dimensions are a multiple of 16 (2D or 3D)
- Parameters:
img0 (ND-array) – image of size [nchan (x Lz) x Ly x Lx]
div (int (optional, default 16)) –
- Returns:
I (ND-array) – padded image
ysub (array, int) – yrange of pixels in I corresponding to img0
xsub (array, int) – xrange of pixels in I corresponding to img0
- cellpose.transforms.random_rotate_and_resize(X, Y=None, scale_range=1.0, xy=(224, 224), do_flip=True, rescale=None, unet=False, random_per_image=True)[source]
augmentation by random rotation and resizing; X and Y are lists or arrays of length nimg, with dims channels x Ly x Lx (channels optional)
- Parameters:
X (list of ND-arrays, float) – list of image arrays of size [nchan x Ly x Lx] or [Ly x Lx]
Y (list of ND-arrays, float) – list of image labels of size [nlabels x Ly x Lx] or [Ly x Lx]. The 1st channel of Y is always nearest-neighbor interpolated (assumed to be masks or a 0-1 representation). If Y.shape[0]==3 and not unet, then the labels are assumed to be [cell probability, Y flow, X flow]. If unet, the second channel is dist_to_bound.
scale_range (float (optional, default 1.0)) – Range of resizing of images for augmentation. Images are resized by (1-scale_range/2) + scale_range * np.random.rand()
xy (tuple, int (optional, default (224,224))) – size of transformed images to return
do_flip (bool (optional, default True)) – whether or not to flip images horizontally
rescale (array, float (optional, default None)) – how much to resize images by before performing augmentations
unet (bool (optional, default False)) –
random_per_image (bool (optional, default True)) – different random rotate and resize per image
- Returns:
imgi (ND-array, float) – transformed images in array [nimg x nchan x xy[0] x xy[1]]
lbl (ND-array, float) – transformed labels in array [nimg x nchan x xy[0] x xy[1]]
scale (array, float) – amount each image was resized by
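A sketch of one augmentation pass; X and Y as described above are hypothetical lists of training images and flow labels:

```python
# Hedged sketch: one randomly rotated/resized training batch.
from cellpose import transforms

imgi, lbl, scale = transforms.random_rotate_and_resize(
    X, Y=Y, xy=(224, 224), do_flip=True, rescale=None)
```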
- cellpose.transforms.reshape(data, channels=[0, 0], chan_first=False)[source]
reshape data using channels
- Parameters:
data (numpy array that's (Z x ) Ly x Lx x nchan) – if data.ndim==3 and data.shape[0]<8, assumed to be nchan x Ly x Lx
channels (list of int of length 2 (optional, default [0,0])) – First element of list is the channel to segment (0=grayscale, 1=red, 2=green, 3=blue). Second element of list is the optional nuclear channel (0=none, 1=red, 2=green, 3=blue). For instance, to train on grayscale images, input [0,0]. To train on images with cells in green and nuclei in blue, input [2,3].
chan_first (bool (optional, default False)) – if True, return data with channels first (nchan x Ly x Lx)
- Returns:
data
- Return type:
numpy array that’s (Z x ) Ly x Lx x nchan (if chan_first==False)
- cellpose.transforms.reshape_and_normalize_data(train_data, test_data=None, channels=None, normalize=True)[source]
inputs converted to correct shapes for training and rescaled so that 0.0=1st percentile and 1.0=99th percentile of image intensities in each channel
- Parameters:
train_data (list of ND-arrays, float) – list of training images of size [Ly x Lx], [nchan x Ly x Lx], or [Ly x Lx x nchan]
test_data (list of ND-arrays, float (optional, default None)) – list of testing images of size [Ly x Lx], [nchan x Ly x Lx], or [Ly x Lx x nchan]
channels (list of int of length 2 (optional, default None)) – First element of list is the channel to segment (0=grayscale, 1=red, 2=green, 3=blue). Second element of list is the optional nuclear channel (0=none, 1=red, 2=green, 3=blue). For instance, to train on grayscale images, input [0,0]. To train on images with cells in green and nuclei in blue, input [2,3].
normalize (bool (optional, True)) – normalize data so 0.0=1st percentile and 1.0=99th percentile of image intensities in each channel
- Returns:
train_data (list of ND-arrays, float) – list of training images of size [2 x Ly x Lx]
test_data (list of ND-arrays, float (optional, default None)) – list of testing images of size [2 x Ly x Lx]
run_test (bool) – whether or not test_data was correct size and is useable during training
- cellpose.transforms.reshape_train_test(train_data, train_labels, test_data, test_labels, channels, normalize=True)[source]
check sizes and reshape train and test data for training
- cellpose.transforms.resize_image(img0, Ly=None, Lx=None, rsz=None, interpolation=cv2.INTER_LINEAR, no_channels=False)[source]
resize image for computing flows / unresize for computing dynamics
- Parameters:
img0 (ND-array) – image of size [Y x X x nchan] or [Lz x Y x X x nchan] or [Lz x Y x X]
Ly (int, optional) –
Lx (int, optional) –
rsz (float, optional) – resize coefficient(s) for image; if Ly is None then rsz is used
interpolation (cv2 interp method (optional, default cv2.INTER_LINEAR)) –
- Returns:
imgs – image of size [Ly x Lx x nchan] or [Lz x Ly x Lx x nchan]
- Return type:
ND-array
- cellpose.transforms.unaugment_tiles(y, unet=False)[source]
reverse test-time augmentations for averaging
- Parameters:
y (float32) – array that’s ntiles_y x ntiles_x x chan x Ly x Lx where chan = (dY, dX, cell prob)
unet (bool (optional, False)) – whether or not unet output or cellpose output
- Returns:
y
- Return type:
float32
Plot functions
- cellpose.plot.dx_to_circ(dP, transparency=False, mask=None)[source]
dP is 2 x Y x X => ‘optic’ flow representation
- Parameters:
dP (2xLyxLx array) – Flow field components [dy,dx]
transparency (bool, default False) – magnitude of flow controls opacity, not lightness (clear background)
mask (2D array) – Multiplies each RGB component to suppress noise
- cellpose.plot.image_to_rgb(img0, channels=[0, 0])[source]
converts an image that is 2 x Ly x Lx or Ly x Lx x 2 to an RGB image of size Ly x Lx x 3
- cellpose.plot.interesting_patch(mask, bsize=130)[source]
get patch of size bsize x bsize with most masks
- cellpose.plot.mask_overlay(img, masks, colors=None)[source]
overlay masks on image (set image to grayscale)
- Parameters:
img (int or float, 2D or 3D array) – img is of size [Ly x Lx (x nchan)]
masks (int, 2D array) – masks where 0=NO masks; 1,2,…=mask labels
colors (int, 2D array (optional, default None)) – size [nmasks x 3], each entry is a color in 0-255 range
- Returns:
RGB – array of masks overlaid on grayscale image
- Return type:
uint8, 3D array
- cellpose.plot.mask_rgb(masks, colors=None)[source]
masks in random rgb colors
- Parameters:
masks (int, 2D array) – masks where 0=NO masks; 1,2,…=mask labels
colors (int, 2D array (optional, default None)) – size [nmasks x 3], each entry is a color in 0-255 range
- Returns:
RGB – array of masks in random RGB colors
- Return type:
uint8, 3D array
- cellpose.plot.outline_view(img0, maski, color=[1, 0, 0], mode='inner')[source]
Generates a red outline overlay onto image.
- cellpose.plot.show_segmentation(fig, img, maski, flowi, channels=[0, 0], file_name=None)[source]
plot segmentation results (like on website)
Can save each panel of figure with file_name option. Use channels option if img input is not an RGB image with 3 channels.
- Parameters:
fig (matplotlib.pyplot.figure) – figure in which to make plot
img (2D or 3D array) – image input into cellpose
maski (int, 2D array) – for image k, masks[k] output from Cellpose.eval, where 0=NO masks; 1,2,…=mask labels
flowi (int, 2D array) – for image k, flows[k][0] output from Cellpose.eval (RGB of flows)
channels (list of int (optional, default [0,0])) – channels used to run Cellpose, no need to use if image is RGB
file_name (str (optional, default None)) – file name of image, if file_name is not None, figure panels are saved
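A sketch of plotting results from Cellpose.eval on a single image; img, masks, and flows are assumed to come from an earlier eval call:

```python
# Hedged sketch: four-panel segmentation figure.
import matplotlib.pyplot as plt
from cellpose import plot

fig = plt.figure(figsize=(12, 5))
plot.show_segmentation(fig, img, masks, flows[0], channels=[0, 0])
plt.tight_layout()
plt.show()
```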
I/O functions
- cellpose.io.add_model(filename)[source]
add model to .cellpose models folder to use with GUI or CLI
- cellpose.io.get_image_files(folder, mask_filter, imf=None, look_one_level_down=False)[source]
find all images in a folder and if look_one_level_down all subfolders
- cellpose.io.masks_flows_to_seg(images, masks, flows, diams, file_names, channels=None)[source]
save output of model eval to be loaded in GUI
can be list output (run on multiple images) or single output (run on single image)
saved to file_names[k]+’_seg.npy’
- Parameters:
images ((list of) 2D or 3D arrays) – images input into cellpose
masks ((list of) 2D arrays, int) – masks output from Cellpose.eval, where 0=NO masks; 1,2,…=mask labels
flows ((list of) list of ND arrays) – flows output from Cellpose.eval
diams (float array) – diameters used to run Cellpose
file_names ((list of) str) – names of files of images
channels (list of int (optional, default None)) – channels used to run Cellpose
- cellpose.io.remove_model(filename, delete=False)[source]
remove model from .cellpose custom model list
- cellpose.io.save_masks(images, masks, flows, file_names, png=True, tif=False, channels=[0, 0], suffix='', save_flows=False, save_outlines=False, save_ncolor=False, dir_above=False, in_folders=False, savedir=None, save_txt=True)[source]
save masks + nicely plotted segmentation image to png and/or tiff
if png, masks[k] for images[k] are saved to file_names[k]+’_cp_masks.png’
if tif, masks[k] for images[k] are saved to file_names[k]+’_cp_masks.tif’
if png and matplotlib installed, full segmentation figure is saved to file_names[k]+’_cp.png’
only the tif option works for 3D data and for empty masks
- Parameters:
images ((list of) 2D, 3D or 4D arrays) – images input into cellpose
masks ((list of) 2D arrays, int) – masks output from Cellpose.eval, where 0=NO masks; 1,2,…=mask labels
flows ((list of) list of ND arrays) – flows output from Cellpose.eval
file_names ((list of) str) – names of files of images
savedir (str) – absolute path where images will be saved. Default is none (saves to image directory)
save_flows (bool (optional, default False)) – save the flow fields as images
save_outlines (bool (optional, default False)) – save an image of the mask outlines
save_ncolor (bool (optional, default False)) – save an ncolor version of the labels: a 4 (or 5, if 4 takes too long) index relabelling that is much easier to visualize than hundreds of unique colors that may be similar and touch; any color map can be applied to it (0,1,2,3,4,…)
save_txt (bool (optional, default True)) – save the outlines as a text file (e.g. for ImageJ)
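The I/O helpers chain naturally after eval; a sketch assuming a folder of images (paths are hypothetical):

```python
# Hedged sketch: batch run + save GUI-loadable _seg.npy files and PNG masks.
from cellpose import io, models

files = io.get_image_files('data/', mask_filter='_masks')
imgs = [io.imread(f) for f in files]

model = models.Cellpose(model_type='cyto')
masks, flows, styles, diams = model.eval(imgs, diameter=None, channels=[0, 0])

io.masks_flows_to_seg(imgs, masks, flows, diams, files, channels=[0, 0])
io.save_masks(imgs, masks, flows, files, png=True)
```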
- cellpose.io.save_rois(masks, file_name)[source]
save masks to .roi files in .zip archive for ImageJ/Fiji
- Parameters:
masks (2D array, int) – masks output from Cellpose.eval, where 0=NO masks; 1,2,…=mask labels
file_name (str) – name to save the .zip file to
Utils functions
- class cellpose.utils.TqdmToLogger(logger, level=None)[source]
Output stream for TQDM which will write to the logger module instead of stdout.
- cellpose.utils.circleMask(d0)[source]
creates array with indices which are the radius of that x,y point
- Parameters:
d0 (int) – patch of (-d0, d0+1) over which radius is computed
- Returns:
rs – array (2*d0+1, 2*d0+1) of radii
dx, dy – indices of the patch
- cellpose.utils.distance_to_boundary(masks)[source]
get distance to boundary of mask pixels
- Parameters:
masks (int, 2D or 3D array) – size [Ly x Lx] or [Lz x Ly x Lx], 0=NO masks; 1,2,…=mask labels
- Returns:
dist_to_bound – size [Ly x Lx] or [Lz x Ly x Lx]
- Return type:
2D or 3D array
- cellpose.utils.download_url_to_file(url, dst, progress=True)[source]
Download object at the given URL to a local path.
Thanks to torch, slightly modified
- Parameters:
url (string) – URL of the object to download
dst (string) – Full path where object will be saved, e.g. /tmp/temporary_file
progress (bool, optional) – whether or not to display a progress bar to stderr Default: True
- cellpose.utils.fill_holes_and_remove_small_masks(masks, min_size=15)[source]
fill holes in masks (2D/3D) and discard masks smaller than min_size (2D)
fill holes in each mask using scipy.ndimage.morphology.binary_fill_holes
(might have issues at borders between cells, todo: check and fix)
- Parameters:
masks (int, 2D or 3D array) – labelled masks, 0=NO masks; 1,2,…=mask labels, size [Ly x Lx] or [Lz x Ly x Lx]
min_size (int (optional, default 15)) – minimum number of pixels per mask, can turn off with -1
- Returns:
masks – masks with holes filled and masks smaller than min_size removed, 0=NO masks; 1,2,…=mask labels, size [Ly x Lx] or [Lz x Ly x Lx]
- Return type:
int, 2D or 3D array
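A sketch of common mask post-processing; masks is a hypothetical 2D integer label array:

```python
# Hedged sketch: clean up masks, then extract outlines for plotting.
from cellpose import utils

masks = utils.fill_holes_and_remove_small_masks(masks, min_size=15)
masks = utils.remove_edge_masks(masks)          # drop cells touching the border
outlines = utils.masks_to_outlines(masks)       # boolean outline image
coords = utils.outlines_list(masks)             # per-mask outline coordinates
```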
- cellpose.utils.get_masks_unet(output, cell_threshold=0, boundary_threshold=0)[source]
create masks using cell probability and cell boundary
- cellpose.utils.masks_to_edges(masks, threshold=1.0)[source]
get edges of masks as a 0-1 array
- Parameters:
masks (int, 2D or 3D array) – size [Ly x Lx] or [Lz x Ly x Lx], 0=NO masks; 1,2,…=mask labels
- Returns:
edges – size [Ly x Lx] or [Lz x Ly x Lx], True pixels are edge pixels
- Return type:
2D or 3D array
- cellpose.utils.masks_to_outlines(masks)[source]
get outlines of masks as a 0-1 array
- Parameters:
masks (int, 2D or 3D array) – size [Ly x Lx] or [Lz x Ly x Lx], 0=NO masks; 1,2,…=mask labels
- Returns:
outlines – size [Ly x Lx] or [Lz x Ly x Lx], True pixels are outlines
- Return type:
2D or 3D array
- cellpose.utils.outlines_list(masks, multiprocessing=True)[source]
get outlines of masks as a list to loop over for plotting This function is a wrapper for outlines_list_single and outlines_list_multi
- cellpose.utils.outlines_list_multi(masks, num_processes=None)[source]
get outlines of masks as a list to loop over for plotting
- cellpose.utils.outlines_list_single(masks)[source]
get outlines of masks as a list to loop over for plotting
- cellpose.utils.remove_edge_masks(masks, change_index=True)[source]
remove masks with pixels on edge of image
- Parameters:
masks (int, 2D or 3D array) – size [Ly x Lx] or [Lz x Ly x Lx], 0=NO masks; 1,2,…=mask labels
change_index (bool (optional, default True)) – if True, after removing masks change indexing so no missing label numbers
- Returns:
masks – size [Ly x Lx] or [Lz x Ly x Lx], 0=NO masks; 1,2,…=mask labels
- Return type:
2D or 3D array