torchlayers.activations module
class torchlayers.activations.HardSigmoid

    Applies the HardSigmoid function element-wise.

    Uses torch.nn.functional.hardtanh internally with 0 and 1 ranges,
    i.e. values are clamped to the [0, 1] interval.

    Parameters:
        tensor (torch.Tensor) – Tensor activated element-wise
    forward(tensor: torch.Tensor)

        Defines the computation performed at every call.

        Should be overridden by all subclasses.

        Note: Although the recipe for forward pass needs to be defined
        within this function, one should call the Module instance
        afterwards instead of this since the former takes care of
        running the registered hooks while the latter silently ignores
        them.
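    A minimal usage sketch, assuming torchlayers is installed; as the
    note above says, the Module instance is called rather than forward:

        import torch
        from torchlayers.activations import HardSigmoid

        activation = HardSigmoid()
        x = torch.randn(4)
        # Calling the instance (not .forward()) runs registered hooks.
        y = activation(x)
        print(y)  # every value clamped to the [0, 1] range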
class torchlayers.activations.HardSwish

    Applies the HardSwish function element-wise:

        HardSwish(x) = x * ReLU6(x + 3) / 6

    While similar in effect to Swish, it should be more CPU-efficient.
    The formula above was proposed by Andrew Howard et al. in
    Searching for MobileNetV3.

    Parameters:
        tensor (torch.Tensor) – Tensor activated element-wise
    forward(tensor: torch.Tensor)

        Defines the computation performed at every call.

        Should be overridden by all subclasses.

        Note: Although the recipe for forward pass needs to be defined
        within this function, one should call the Module instance
        afterwards instead of this since the former takes care of
        running the registered hooks while the latter silently ignores
        them.
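    For reference, the formula translates directly into built-in
    PyTorch ops. This is an illustrative sketch, not necessarily
    torchlayers' internal implementation:

        import torch
        import torch.nn.functional as F

        def hard_swish_reference(x: torch.Tensor) -> torch.Tensor:
            # HardSwish(x) = x * ReLU6(x + 3) / 6
            return x * F.relu6(x + 3) / 6

    Replacing Swish's sigmoid with the piecewise-linear relu6 is what
    makes the hard variant cheaper on CPU.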
class torchlayers.activations.Swish(beta: float = 1.0)

    Applies the Swish function element-wise:

        Swish(x) = x * sigmoid(beta * x)

    This form was originally proposed by Prajit Ramachandran et al. in
    Searching for Activation Functions.

    Parameters:
        beta (float, optional) – Multiplier used for sigmoid.
            Default: 1.0 (no multiplier)
    forward(tensor: torch.Tensor)

        Defines the computation performed at every call.

        Should be overridden by all subclasses.

        Note: Although the recipe for forward pass needs to be defined
        within this function, one should call the Module instance
        afterwards instead of this since the former takes care of
        running the registered hooks while the latter silently ignores
        them.
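    The formula again maps one-to-one onto PyTorch ops; an
    illustrative sketch, not torchlayers' exact code:

        import torch

        def swish_reference(x: torch.Tensor, beta: float = 1.0) -> torch.Tensor:
            # Swish(x) = x * sigmoid(beta * x); beta = 1.0 gives plain
            # x * sigmoid(x), also known as SiLU.
            return x * torch.sigmoid(beta * x)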
torchlayers.activations.hard_sigmoid(tensor: torch.Tensor, inplace: bool = False) → torch.Tensor

    Applies the HardSigmoid function element-wise.

    See torchlayers.activations.HardSigmoid for more details.

    Parameters:
        tensor (torch.Tensor) – Tensor activated element-wise
        inplace (bool, optional) – Whether the operation should be
            performed in-place. Default: False

    Returns:
        Tensor activated element-wise

    Return type:
        torch.Tensor
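    A minimal usage sketch showing both modes of the inplace flag,
    assuming torchlayers is installed:

        import torch
        from torchlayers.activations import hard_sigmoid

        x = torch.randn(8)
        y = hard_sigmoid(x)            # returns a new tensor
        hard_sigmoid(x, inplace=True)  # overwrites x instead
        # Note: in-place ops can interfere with autograd on tensors
        # that require gradients.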
torchlayers.activations.hard_swish(tensor: torch.Tensor) → torch.Tensor

    Applies the HardSwish function element-wise.

    See torchlayers.activations.HardSwish for more details.

    Parameters:
        tensor (torch.Tensor) – Tensor activated element-wise

    Returns:
        Tensor activated element-wise

    Return type:
        torch.Tensor
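    A quick functional-form sketch, assuming torchlayers is installed;
    under the standard formula the output is exactly 0 for inputs at
    or below -3 and exactly the input for inputs at or above 3:

        import torch
        from torchlayers.activations import hard_swish

        x = torch.linspace(-6, 6, 5)  # tensor([-6., -3., 0., 3., 6.])
        # Expected: tensor([0., 0., 0., 3., 6.]) under the standard formula.
        print(hard_swish(x))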
torchlayers.activations.swish(tensor: torch.Tensor, beta: float = 1.0) → torch.Tensor

    Applies the Swish function element-wise.

    See torchlayers.activations.Swish for more details.

    Parameters:
        tensor (torch.Tensor) – Tensor activated element-wise
        beta (float, optional) – Multiplier used for sigmoid.
            Default: 1.0 (no multiplier)

    Returns:
        Tensor activated element-wise

    Return type:
        torch.Tensor
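    A minimal usage sketch of the functional form, assuming
    torchlayers is installed. As beta grows, sigmoid(beta * x)
    approaches a step function, so Swish approaches ReLU:

        import torch
        from torchlayers.activations import swish

        x = torch.randn(8)
        y_default = swish(x)            # beta = 1.0: x * sigmoid(x)
        y_sharper = swish(x, beta=2.0)  # steeper sigmoid gate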