torchlayers.regularization module

class torchlayers.regularization.Dropout(p=0.5, inplace=False)[source]

Randomly zero out some of the tensor elements.

Based on the input shape, either the 2D or the 3D version of dropout is created: 2D for 4D inputs and 3D for 5D inputs (counting the batch as the first dimension). For inputs of any other dimensionality, standard torch.nn.Dropout will be used.

Parameters
  • p (float, optional) – Probability of an element to be zeroed. Default: 0.5

  • inplace (bool, optional) – If True, will do this operation in-place. Default: False
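
Below is a minimal usage sketch (not part of the original documentation; it assumes torchlayers is installed alongside torch, and that the dropout variant is chosen from the shape of the first input, hence the separate instances):

    import torch
    import torchlayers

    # 4D input (batch, channels, height, width): the 2D dropout variant is used
    image_dropout = torchlayers.regularization.Dropout(p=0.2)
    images = torch.randn(8, 3, 32, 32)
    dropped_images = image_dropout(images)

    # 2D input (batch, features): standard torch.nn.Dropout is used
    feature_dropout = torchlayers.regularization.Dropout(p=0.2)
    features = torch.randn(8, 128)
    dropped_features = feature_dropout(features)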

class torchlayers.regularization.StandardNormalNoise[source]

Add noise from standard normal distribution during forward pass.
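
A minimal usage sketch (not from the original documentation; the explicit train() call reflects an assumption that, like most regularizers, the noise may only be active in training mode):

    import torch
    import torchlayers

    noise = torchlayers.regularization.StandardNormalNoise()
    noise.train()  # assumption: noise may be applied only in training mode

    inputs = torch.zeros(4, 10)
    outputs = noise(inputs)
    # Conceptually equivalent to inputs + torch.randn_like(inputs),
    # i.e. element-wise noise drawn from N(0, 1)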

forward(inputs)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance itself rather than this method directly, since the former takes care of running the registered hooks while the latter silently ignores them.

class torchlayers.regularization.StochasticDepth(module: torch.nn.modules.module.Module, p: float = 0.5)[source]

Randomly skip the wrapped module during training with the specified probability p, leaving inference untouched.

Originally proposed by Gao Huang et al. in Deep Networks with Stochastic Depth.

Although devised as a regularization technique, other research suggests:

  • “[…] StochasticDepth Nets are less tuned for low-level feature extraction but more tuned for higher level feature differentiation.”

  • “[…] Stochasticity does not help with the ‘dead neurons’ problem; in fact the problem is actually more pronounced in the early layers. Nonetheless, the Stochastic Depth Network has relatively fewer dead neurons in later layers.”

It might be useful to apply this technique to layers closer to the bottleneck.

Parameters
  • module (torch.nn.Module) – Any module whose output might be skipped during training (its output shape has to be equal to the shape of its inputs).

  • p (float, optional) – Probability of survival (i.e. the probability that the layer will be kept). Default: 0.5
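
A minimal usage sketch (not from the original documentation; the residual-style block below is a hypothetical example of a module whose output shape matches its input shape, as required):

    import torch
    import torchlayers

    # Hypothetical block; its output shape equals its input shape
    block = torch.nn.Sequential(
        torch.nn.Conv2d(64, 64, kernel_size=3, padding=1),
        torch.nn.ReLU(),
    )
    layer = torchlayers.regularization.StochasticDepth(block, p=0.8)

    inputs = torch.randn(2, 64, 16, 16)

    layer.train()
    outputs = layer(inputs)  # block runs with probability 0.8, otherwise inputs pass through unchanged

    layer.eval()
    outputs = layer(inputs)  # inference untouched: block always runs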

forward(inputs)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance itself rather than this method directly, since the former takes care of running the registered hooks while the latter silently ignores them.