fnet package

Submodules

fnet.fnet_ensemble module

class fnet.fnet_ensemble.FnetEnsemble(paths_model: Union[str, List[str]])[source]

Bases: fnet.fnet_model.Model

Ensemble of FnetModels.

Parameters

paths_model – Path to a directory of saved models or a list of paths to saved models.

paths_model

Paths to saved models in the ensemble.

Type

Union[str, List[str]]

gpu_ids

GPU(s) used for prediction tasks.

Type

List[int]

load_state(state: dict, no_optim: bool = False)[source]
predict(x: Union[torch.Tensor, numpy.ndarray], tta: bool = False) → torch.Tensor[source]

Performs model prediction.

Parameters
  • x – Batched input.

  • tta – Set to use test-time augmentation.

Returns

Model prediction.

Return type

torch.Tensor

save(path_save: str)[source]

Saves model to disk.

Parameters

path_save – Filename to which model is saved.

to_gpu(gpu_ids: Union[int, list]) → None[source]

Move network to specified GPU(s).

Parameters

gpu_ids – GPU(s) on which to perform training or prediction.
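
Putting the pieces together, a minimal usage sketch (the saved-model paths and the input shape are hypothetical):

    import torch

    from fnet.fnet_ensemble import FnetEnsemble

    # Hypothetical paths to previously saved models; a single directory of
    # saved models would also work for paths_model.
    ensemble = FnetEnsemble(paths_model=["saved_models/model_0.p", "saved_models/model_1.p"])
    ensemble.to_gpu(0)  # move the member networks to GPU 0

    x = torch.rand(1, 1, 32, 64, 64)  # assumed batched 3d input: (N, C, Z, Y, X)
    y_hat = ensemble.predict(x, tta=True)  # ensemble prediction with test-time augmentation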

fnet.fnet_model module

Module to define main fnet model wrapper class.

class fnet.fnet_model.Model(betas=(0.5, 0.999), criterion_class='fnet.losses.WeightedMSE', init_weights=True, lr=0.001, nn_class='fnet.nn_modules.fnet_nn_3d.Net', nn_kwargs={}, scheduler=None, weight_decay=0, gpu_ids=-1)[source]

Bases: object

Class that encompasses a PyTorch network and its optimizer.
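
For example, a model can be constructed with the defaults shown in the signature above, overriding only selected options (the values here are illustrative):

    from fnet.fnet_model import Model

    # Every keyword below appears in the constructor signature; only lr and
    # gpu_ids deviate from their defaults.
    model = Model(
        nn_class="fnet.nn_modules.fnet_nn_3d.Net",
        criterion_class="fnet.losses.WeightedMSE",
        lr=1e-4,
        gpu_ids=0,
    )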

apply_on_single_zstack(input_img: Optional[numpy.ndarray] = None, filename: Union[pathlib.Path, str, None] = None, inputCh: Optional[int] = None, normalization: Optional[Callable] = None, already_normalized: bool = False, ResizeRatio: Optional[Sequence[float]] = None, cutoff: Optional[float] = None) → numpy.ndarray[source]

Applies model to a single z-stack input.

This assumes the loaded network architecture can receive 3d grayscale images as input.

Parameters
  • input_img – 3d or 4d image with shape (Z, Y, X) or (C, Z, Y, X), respectively.

  • filename – Path to input image. Ignored if input_img is supplied.

  • inputCh – Selected channel if filename is a path to a 4d image.

  • normalization – Input image normalization function.

  • already_normalized – Set to skip input normalization.

  • ResizeRatio – If specified, resizes each dimension of the input image by the corresponding factor.

  • cutoff – If specified, converts the output to a binary image with cutoff as threshold value.

Returns

Predicted image with shape (Z, Y, X). If cutoff is set, dtype will be numpy.uint8. Otherwise, dtype will be numpy.float.

Return type

np.ndarray

Raises
  • ValueError – If parameters are invalid.

  • FileNotFoundError – If specified file does not exist.

  • IndexError – If inputCh is invalid.
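
A brief sketch of applying a model to a z-stack (the random array stands in for a real 3d grayscale image, and model is assumed to be a trained Model):

    import numpy as np

    # Stand-in for a real z-stack with shape (Z, Y, X).
    stack = np.random.rand(32, 128, 128).astype(np.float32)

    pred = model.apply_on_single_zstack(input_img=stack, already_normalized=True)
    # With cutoff set, the result is instead a binary uint8 image thresholded at 0.5.
    mask = model.apply_on_single_zstack(input_img=stack, already_normalized=True, cutoff=0.5)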

evaluate(x: torch.Tensor, y: torch.Tensor, metric: Optional = None, piecewise: bool = False, **kwargs) → Tuple[float, torch.Tensor][source]

Evaluates model output using a metric function.

Parameters
  • x – Input data.

  • y – Target data.

  • metric – Metric function. If None, uses fnet.metrics.corr_coef.

  • piecewise – Set to perform predictions piecewise.

  • **kwargs – Additional kwargs to be passed to predict() method.

Returns

  • float – Evaluation as determined by metric function.

  • torch.Tensor – Model prediction.
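
For instance, to score a prediction with the default metric passed explicitly (x and y are assumed to be input and target tensors):

    from fnet.metrics import corr_coef

    # piecewise=True predicts on patches and stitches the results together.
    score, y_hat = model.evaluate(x, y, metric=corr_coef, piecewise=True)
    print(f"correlation: {score:.3f}")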

get_state()[source]
load_state(state: dict, no_optim: bool = False)[source]
predict(x: Union[torch.Tensor, numpy.ndarray], tta: bool = False) → torch.Tensor[source]

Performs model prediction on a single example.

Parameters
  • x – Input data.

  • tta – Set to use test-time augmentation.

Returns

Model prediction.

Return type

torch.Tensor

predict_on_batch(x_batch: torch.Tensor) → torch.Tensor[source]

Performs model prediction on a batch of data.

Parameters

x_batch – Batch of input data.

Returns

Batch of model predictions.

Return type

torch.Tensor

predict_piecewise(x: Union[torch.Tensor, numpy.ndarray], **predict_kwargs) → torch.Tensor[source]

Performs model prediction piecewise on a single example.

Predicts on patches of the input and stitches together the predictions.

Parameters
  • x – Input data.

  • **predict_kwargs – Kwargs to pass to predict method.

Returns

Model prediction.

Return type

torch.Tensor

save(path_save: str)[source]

Saves model to disk.

Parameters

path_save – Filename to which model is saved.

test_on_batch(x_batch: torch.Tensor, y_batch: torch.Tensor, weight_map_batch: Optional[torch.Tensor] = None) → float[source]

Test model on a batch of inputs and targets.

Parameters
  • x_batch – Batched input.

  • y_batch – Batched target.

  • weight_map_batch – Optional batched weight map.

Returns

Loss as evaluated by self.criterion.

Return type

float

test_on_iterator(iterator: Iterator, **kwargs: dict) → float[source]

Test model on an iterator whose items are passed to test_on_batch.

Parameters
  • iterator – Iterator that generates items to be passed to test_on_batch.

  • kwargs – Additional keyword arguments to be passed to test_on_batch.

Returns

Mean loss over the items in the iterator.

Return type

float

to_gpu(gpu_ids: Union[int, List[int]]) → None[source]

Move network to specified GPU(s).

Parameters

gpu_ids – GPU(s) on which to perform training or prediction.

train_on_batch(x_batch: torch.Tensor, y_batch: torch.Tensor, weight_map_batch: Optional[torch.Tensor] = None) → float[source]

Update model using a batch of inputs and targets.

Parameters
  • x_batch – Batched input.

  • y_batch – Batched target.

  • weight_map_batch – Optional batched weight map.

Returns

Loss as determined by self.criterion.

Return type

float
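
A minimal training-loop sketch tying these methods together (toy random batches stand in for a real data loader, and model is assumed to be a constructed Model):

    import torch

    # Toy (signal, target) batches; in practice these come from a data loader.
    batches = [
        (torch.rand(1, 1, 16, 32, 32), torch.rand(1, 1, 16, 32, 32))
        for _ in range(4)
    ]

    for epoch in range(2):
        for x_batch, y_batch in batches:
            loss = model.train_on_batch(x_batch, y_batch)  # updates weights, returns the loss
        # Mean loss over the same items without updating weights.
        loss_eval = model.test_on_iterator(iter(batches))
        print(f"epoch {epoch}: train {loss:.4f}, eval {loss_eval:.4f}")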

fnet.fnet_model.get_per_param_options(module, wd)[source]

Returns a list of per-parameter-group options.

Applies the specified weight decay (wd) to all parameters except those within batch-norm layers and bias parameters.

fnet.fnetlogger module

class fnet.fnetlogger.FnetLogger(path_csv=None, columns=None)[source]

Bases: object

Log values in a dict of lists.

add(entry)[source]
to_csv(path_csv)[source]
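
A brief usage sketch (the dict-per-entry format for add() is assumed from the "dict of lists" description):

    from fnet.fnetlogger import FnetLogger

    logger = FnetLogger(columns=["num_iter", "loss"])
    logger.add({"num_iter": 0, "loss": 0.42})  # assumed: one value per column
    logger.add({"num_iter": 1, "loss": 0.37})
    logger.to_csv("losses.csv")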

fnet.losses module

Loss functions for fnet models.

class fnet.losses.HeteroscedasticLoss[source]

Bases: torch.nn.modules.module.Module

Loss function to capture heteroscedastic aleatoric uncertainty.

forward(y_hat_batch: torch.Tensor, y_batch: torch.Tensor)[source]

Calculates loss.

Parameters
  • y_hat_batch – Batched, 2-channel model output.

  • y_batch – Batched, 1-channel target.

class fnet.losses.WeightedMSE[source]

Bases: torch.nn.modules.module.Module

Criterion for weighted mean-squared error.

forward(y_hat_batch: torch.Tensor, y_batch: torch.Tensor, weight_map_batch: Optional[torch.Tensor] = None)[source]

Calculates weighted MSE.

Parameters
  • y_hat_batch – Batched prediction.

  • y_batch – Batched target.

  • weight_map_batch – Optional weight map.
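
A short sketch of the criterion (shapes are illustrative; a uniform weight map reduces to plain MSE):

    import torch

    from fnet.losses import WeightedMSE

    criterion = WeightedMSE()
    y_hat = torch.rand(2, 1, 16, 32, 32)  # batched prediction
    y = torch.rand(2, 1, 16, 32, 32)      # batched target
    weights = torch.ones_like(y)          # uniform weights: equivalent to unweighted MSE
    loss = criterion(y_hat, y, weight_map_batch=weights)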

fnet.metrics module

Model evaluation metrics.

fnet.metrics.corr_coef(a: Union[numpy.ndarray, torch.Tensor], b: Union[numpy.ndarray, torch.Tensor]) → float[source]

Calculates the Pearson correlation coefficient between the inputs.

Parameters
  • a – First input.

  • b – Second input.

Returns

Pearson correlation coefficient between the inputs.

Return type

float

fnet.metrics.corr_coef_chan0(a: Union[numpy.ndarray, torch.Tensor], b: Union[numpy.ndarray, torch.Tensor]) → float[source]

Calculates the Pearson correlation coefficient between channel 0 of the inputs.

Assumes the first dimension of the inputs is the channel dimension.

Parameters
  • a – First input.

  • b – Second input.

Returns

Pearson correlation coefficient between channel 0 of the inputs.

Return type

float
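
For example (the arrays are illustrative; corr_coef_chan0 treats the first dimension as the channel dimension):

    import numpy as np

    from fnet.metrics import corr_coef, corr_coef_chan0

    a = np.random.rand(2, 16, 32, 32)
    b = a + 0.1 * np.random.rand(2, 16, 32, 32)  # noisy copy, so correlation is high

    print(corr_coef(a, b))        # computed over all elements
    print(corr_coef_chan0(a, b))  # computed over channel 0 only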

fnet.models module

fnet.models.create_ensemble(paths_model: Union[str, List[str]], path_save_dir: str) → None[source]

Create and save an ensemble model.

Parameters
  • paths_model – Paths to models or model directories. Paths can be specified as items in a list or as a string with paths separated by spaces. Any model specified as a directory is assumed to be at ‘directory/model.p’.

  • path_save_dir – Directory in which to save the model. The model will be saved in path_save_dir as ‘model.p’.

fnet.models.load_model(path_model: str, no_optim: bool = False, checkpoint: Optional[str] = None, path_options: Optional[str] = None) → fnet.fnet_model.Model[source]

Loads a saved FnetModel.

Parameters
  • path_model – Path to model as a directory or .p file.

  • no_optim – Set to skip loading the model optimizer.

  • checkpoint – Optional string that identifies a model checkpoint.

  • path_options – Path to training options json. For legacy saved models where the FnetModel class/kwargs are not included in the model save file.

Returns

Loaded model.

Return type

Model

fnet.models.load_or_init_model(path_model: str, path_options: str)[source]

Loads a saved model if it exists; otherwise, initializes a new model.

Parameters
  • path_model – Path to saved model.

  • path_options – Path to json where model training options are saved.

Returns

Loaded or new FnetModel instance.

Return type

FnetModel
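
A sketch of the typical resume-or-start-fresh pattern (the paths are hypothetical):

    from fnet.models import load_or_init_model

    # Resumes from model.p if it exists; otherwise initializes a new model
    # from the training options JSON.
    model = load_or_init_model(
        path_model="saved_models/demo/model.p",
        path_options="saved_models/demo/train_options.json",
    )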

fnet.predict_piecewise module

fnet.predict_piecewise.predict_piecewise(predictor, tensor_in: torch.Tensor, dims_max: Union[int, List[int]] = 64, overlaps: Union[int, List[int]] = 0, **predict_kwargs) → torch.Tensor[source]

Performs piecewise prediction and combines results.

Parameters
  • predictor – An object with a predict() method.

  • tensor_in – Tensor to be input into predictor piecewise. Should be 3d or 4d, with the first dimension being the channel dimension.

  • dims_max – Specifies the dimensions of each sub-prediction.

  • overlaps – Specifies the overlap along each dimension for sub-predictions.

  • **predict_kwargs – Kwargs to pass to predict method.

Returns

Prediction with size tensor_in.size().

Return type

torch.Tensor
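
For example (sizes are illustrative; any object with a predict() method, such as a loaded Model, can serve as the predictor):

    import torch

    from fnet.predict_piecewise import predict_piecewise

    # 4d input with the channel dimension first: (C, Z, Y, X).
    tensor_in = torch.rand(1, 64, 256, 256)

    # Predict on sub-volumes of at most 32 per dimension with 8-voxel overlap;
    # model is assumed to be a loaded fnet Model.
    y_hat = predict_piecewise(model, tensor_in, dims_max=32, overlaps=8)
    assert y_hat.size() == tensor_in.size()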

fnet.transforms module

class fnet.transforms.Capper(low=None, hi=None)[source]

Bases: object

class fnet.transforms.Cropper(cropping='-', by=16, offset='mid', n_max_pixels=9732096, dims_no_crop=None)[source]

Bases: object

undo_last(x_in)[source]

Pads the input with zeros so that its dimensions match the dimensions of the last input to __call__.

class fnet.transforms.Normalize(per_dim=None)[source]

Bases: object

class fnet.transforms.Padder(padding='+', by=16, mode='constant')[source]

Bases: object

undo_last(x_in)[source]

Crops the input so that its dimensions match the dimensions of the last input to __call__.

class fnet.transforms.Propper(action='-', **kwargs)[source]

Bases: object

Padder + Cropper

undo_last(x_in)[source]
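
A sketch of the adjust-then-undo round trip, assuming Propper is callable like the other transforms and that "+" selects padding while "-" selects cropping (inferred from the Padder and Cropper defaults above):

    import numpy as np

    from fnet.transforms import Propper

    prop = Propper(action="+")      # assumed: "+" pads, "-" crops
    x = np.random.rand(60, 250, 250)
    x_fit = prop(x)                 # dimensions adjusted to multiples of the default by=16
    x_back = prop.undo_last(x_fit)  # restores the dimensions of the last __call__ input
    assert x_back.shape == x.shape
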
class fnet.transforms.Resizer(factors, per_dim=None)[source]

Bases: object

class fnet.transforms.ToFloat[source]

Bases: object

fnet.transforms.do_nothing(img)[source]
fnet.transforms.flip_x(ar: numpy.ndarray) → numpy.ndarray[source]

Flip array along x axis.

Array dimensions should end in YX.

Parameters

ar – Input array to be flipped.

Returns

Flipped array.

Return type

np.ndarray

fnet.transforms.flip_y(ar: numpy.ndarray) → numpy.ndarray[source]

Flip array along y axis.

Array dimensions should end in YX.

Parameters

ar – Input array to be flipped.

Returns

Flipped array.

Return type

np.ndarray

fnet.transforms.norm_around_center(ar: numpy.ndarray, z_center: Optional[int] = None)[source]

Returns normalized version of input array.

The array will be normalized with respect to the mean and standard deviation of the pixel intensities of the sub-array of length 32 in the z-dimension centered around the array’s z_center.

Parameters
  • ar – Input 3d array to be normalized.

  • z_center – Z-index of cell centers.

Returns

Normalized array, dtype = float32

Return type

np.ndarray
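
For example (the random array stands in for a real 3d stack):

    import numpy as np

    from fnet.transforms import norm_around_center

    ar = np.random.rand(64, 128, 128).astype(np.float32)
    # Statistics come from the 32 z-slices centered on z index 32.
    ar_norm = norm_around_center(ar, z_center=32)
    print(ar_norm.dtype)  # float32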

fnet.transforms.normalize(img, per_dim=None)[source]

Subtracts the mean and sets the standard deviation to 1.0.

Parameters

per_dim – If specified, each slice along this dimension is normalized independently (statistics are computed over the remaining dimensions).

Module contents

class fnet.FnetLogger(path_csv=None, columns=None)[source]

Bases: object

Log values in a dict of lists.

add(entry)[source]
to_csv(path_csv)[source]