torchlayers.activations module

class torchlayers.activations.HardSigmoid [source]
    Applies HardSigmoid function elementwise.
    Uses torch.nn.functional.hardtanh internally, with 0 and 1 as the minimum and maximum values.

    Parameters
        tensor (torch.Tensor) – Tensor activated elementwise

    forward(tensor: torch.Tensor) [source]
        Defines the computation performed at every call.
        Should be overridden by all subclasses.

        Note
            Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
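As a concrete illustration of the underlying math (a scalar sketch, not the library implementation, which operates on torch.Tensor), HardSigmoid is hardtanh clamped to the [0, 1] range:

```python
def hard_sigmoid(x: float) -> float:
    """Scalar HardSigmoid: hardtanh with min 0 and max 1, i.e. clamp into [0, 1]."""
    # Mirrors torch.nn.functional.hardtanh(x, min_val=0.0, max_val=1.0)
    return min(max(x, 0.0), 1.0)
```

Inputs below 0 saturate to 0, inputs above 1 saturate to 1, and the function is the identity in between.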

class torchlayers.activations.HardSwish [source]
    Applies HardSwish function elementwise.

    $HardSwish(x) = x * \min(\max(0, x + 3), 6) / 6$

    While similar in effect to Swish, it should be more CPU-efficient. The formula above was proposed by Andrew Howard et al. in Searching for MobileNetV3.

    Parameters
        tensor (torch.Tensor) – Tensor activated elementwise

    forward(tensor: torch.Tensor) [source]
        Defines the computation performed at every call.
        Should be overridden by all subclasses.

        Note
            Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
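A scalar sketch of the HardSwish formula above (for illustration only; the library class applies it elementwise over a torch.Tensor):

```python
def hard_swish(x: float) -> float:
    """Scalar HardSwish: x * min(max(0, x + 3), 6) / 6."""
    return x * min(max(0.0, x + 3.0), 6.0) / 6.0
```

Note that the clamp term saturates: for x <= -3 the output is 0, and for x >= 3 the output equals x, matching Swish's ReLU-like shape without the exponential.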

class torchlayers.activations.Swish(beta: float = 1.0) [source]
    Applies Swish function elementwise.

    $Swish(x) = x / (1 + \exp(-\beta x))$

    This form was originally proposed by Prajit Ramachandran et al. in Searching for Activation Functions.

    Parameters
        beta (float, optional) – Multiplier used for sigmoid. Default: 1.0 (no multiplier)

    forward(tensor: torch.Tensor) [source]
        Defines the computation performed at every call.
        Should be overridden by all subclasses.

        Note
            Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
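A scalar sketch of the Swish formula above, x multiplied by sigmoid(beta * x), again for illustration rather than the tensor-based library implementation:

```python
import math

def swish(x: float, beta: float = 1.0) -> float:
    """Scalar Swish: x * sigmoid(beta * x) = x / (1 + exp(-beta * x))."""
    return x / (1.0 + math.exp(-beta * x))
```

With beta = 1.0 this is the SiLU function; as beta grows large the sigmoid approaches a step function and Swish approaches ReLU.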

torchlayers.activations.hard_sigmoid(tensor: torch.Tensor, inplace: bool = False) → torch.Tensor [source]
    Applies HardSigmoid function elementwise.
    See torchlayers.activations.HardSigmoid for more details.

    Parameters
        tensor (torch.Tensor) – Tensor activated elementwise
        inplace (bool, optional) – Whether the operation should be performed inplace. Default: False

    Returns
        Tensor activated elementwise
    Return type
        torch.Tensor

torchlayers.activations.hard_swish(tensor: torch.Tensor) → torch.Tensor [source]
    Applies HardSwish function elementwise.
    See torchlayers.activations.HardSwish for more details.

    Parameters
        tensor (torch.Tensor) – Tensor activated elementwise

    Returns
        Tensor activated elementwise
    Return type
        torch.Tensor

torchlayers.activations.swish(tensor: torch.Tensor, beta: float = 1.0) → torch.Tensor [source]
    Applies Swish function elementwise.
    See torchlayers.activations.Swish for more details.

    Parameters
        tensor (torch.Tensor) – Tensor activated elementwise
        beta (float, optional) – Multiplier used for sigmoid. Default: 1.0 (no multiplier)

    Returns
        Tensor activated elementwise
    Return type
        torch.Tensor