cyto_dl.nn.res_unit module#
- class cyto_dl.nn.res_unit.ResidualUnit(spatial_dims: int, in_channels: int, out_channels: int, strides: Sequence[int] | int = 1, kernel_size: Sequence[int] | int = 3, subunits: int = 2, adn_ordering: str = 'NDA', act: tuple | str | None = 'PRELU', norm: tuple | str | None = 'INSTANCE', dropout: tuple | str | float | None = None, dropout_dim: int | None = 1, dilation: Sequence[int] | int = 1, bias: bool = True, last_conv_only: bool = False, padding: Sequence[int] | int | None = None)[source]#
Bases: Module
Residual module with multiple convolutions and a residual connection.
For example:
from monai.networks.blocks import ResidualUnit

convs = ResidualUnit(
    spatial_dims=3,
    in_channels=1,
    out_channels=1,
    adn_ordering="AN",
    act=("prelu", {"init": 0.2}),
    norm=("layer", {"normalized_shape": (10, 10, 10)}),
)
print(convs)
output:
ResidualUnit(
  (conv): Sequential(
    (unit0): Convolution(
      (conv): Conv3d(1, 1, kernel_size=(3, 3, 3), stride=(1, 1, 1), padding=(1, 1, 1))
      (adn): ADN(
        (A): PReLU(num_parameters=1)
        (N): LayerNorm((10, 10, 10), eps=1e-05, elementwise_affine=True)
      )
    )
    (unit1): Convolution(
      (conv): Conv3d(1, 1, kernel_size=(3, 3, 3), stride=(1, 1, 1), padding=(1, 1, 1))
      (adn): ADN(
        (A): PReLU(num_parameters=1)
        (N): LayerNorm((10, 10, 10), eps=1e-05, elementwise_affine=True)
      )
    )
  )
  (residual): Identity()
)
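Continuing the example above, a minimal forward-pass sketch (not part of the original docstring): with strides=1 and in_channels == out_channels, the residual branch is an identity, so the output has the same shape as the input. The 10x10x10 spatial size is required here because layer norm was configured with normalized_shape=(10, 10, 10).

import torch

# Hedged sketch: run the `convs` block built above on a random volume.
# Input shape is (batch, channels, D, H, W).
x = torch.rand(1, 1, 10, 10, 10)
y = convs(x)  # convolution path output + identity residual
print(y.shape)  # torch.Size([1, 1, 10, 10, 10])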
- Args:
    spatial_dims: number of spatial dimensions.
    in_channels: number of input channels.
    out_channels: number of output channels.
    strides: convolution stride. Defaults to 1 (the effect of strides on output shape is sketched after this parameter list).
    kernel_size: convolution kernel size. Defaults to 3.
    subunits: number of convolutions. Defaults to 2.
    adn_ordering: a string representing the ordering of activation (A), normalization (N), and dropout (D). Defaults to "NDA".
    act: activation type and arguments. Defaults to PReLU.
    norm: feature normalization type and arguments. Defaults to instance norm.
    dropout: dropout ratio. Defaults to no dropout.
    dropout_dim: determines the dimension of dropout. Defaults to 1.
        When dropout_dim = 1, randomly zeroes some of the elements of each channel.
        When dropout_dim = 2, randomly zeroes out entire channels (a channel is a 2D feature map).
        When dropout_dim = 3, randomly zeroes out entire channels (a channel is a 3D feature map).
        The value of dropout_dim should be no larger than the value of spatial_dims.
    dilation: dilation rate. Defaults to 1.
    bias: whether to have a bias term. Defaults to True.
    last_conv_only: for the last subunit, whether to use the convolutional layer only. Defaults to False.
    padding: controls the amount of implicit zero padding on both sides for padding number of points for each dimension. Defaults to None.
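A hypothetical usage sketch (not part of the original docstring) showing how strides changes the output shape, assuming cyto_dl.nn.res_unit.ResidualUnit behaves like the MONAI block it mirrors: when strides != 1 or in_channels != out_channels, the residual branch becomes a (strided) convolution rather than an identity, so the two paths stay shape-compatible.

import torch

from cyto_dl.nn.res_unit import ResidualUnit

# Hypothetical 2D residual unit that downsamples by 2 and doubles the channels.
block = ResidualUnit(
    spatial_dims=2,
    in_channels=8,
    out_channels=16,
    strides=2,
    kernel_size=3,
    subunits=2,
)

x = torch.rand(1, 8, 32, 32)   # (batch, channels, H, W)
y = block(x)                   # conv path + strided residual projection
print(y.shape)                 # torch.Size([1, 16, 16, 16])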
See also:
monai.networks.blocks.Convolution