torchlayers.activations module

class torchlayers.activations.HardSigmoid[source]

Applies the HardSigmoid function element-wise.

Uses torch.nn.functional.hardtanh internally, clamping values to the [0, 1] range.

Parameters

tensor (torch.Tensor) – Tensor activated element-wise

forward(tensor: torch.Tensor)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this method, since the former takes care of running the registered hooks while the latter silently ignores them.
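
Example (a minimal usage sketch; the module follows the standard torch.nn.Module call convention and the import path documented above):

>>> import torch
>>> from torchlayers.activations import HardSigmoid
>>> activation = HardSigmoid()
>>> inputs = torch.randn(2, 3)
>>> outputs = activation(inputs)  # values clamped to the [0, 1] range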

class torchlayers.activations.HardSwish[source]

Applies the HardSwish function element-wise.

HardSwish(x) = x * \min(\max(0, x + 3), 6) / 6

While similar in effect to Swish, it should be more CPU-efficient. The formula above was proposed by Andrew Howard et al. in Searching for MobileNetV3.

Parameters

tensor (torch.Tensor) – Tensor activated element-wise

forward(tensor: torch.Tensor)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this method, since the former takes care of running the registered hooks while the latter silently ignores them.
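
Example (a sketch checking the module output against the formula above; it assumes the documented formula is implemented as written):

>>> import torch
>>> from torchlayers.activations import HardSwish
>>> hard_swish = HardSwish()
>>> x = torch.randn(4)
>>> expected = x * torch.clamp(x + 3, min=0, max=6) / 6
>>> torch.allclose(hard_swish(x), expected)
True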

class torchlayers.activations.Swish(beta: float = 1.0)[source]

Applies the Swish function element-wise.

Swish(x) = x / (1 + \exp(-beta * x))

This form was originally proposed by Prajit Ramachandran et al. in Searching for Activation Functions.

Parameters

beta (float, optional) – Multiplier used for sigmoid. Default: 1.0 (standard sigmoid)

forward(tensor: torch.Tensor)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this method, since the former takes care of running the registered hooks while the latter silently ignores them.
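
Example (a minimal sketch; beta scales the input of the sigmoid gate, so beta=1.0 gives x * sigmoid(x)):

>>> import torch
>>> from torchlayers.activations import Swish
>>> swish = Swish(beta=1.5)
>>> x = torch.randn(4)
>>> expected = x * torch.sigmoid(1.5 * x)  # same as x / (1 + exp(-1.5 * x))
>>> torch.allclose(swish(x), expected)
True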

torchlayers.activations.hard_sigmoid(tensor: torch.Tensor, inplace: bool = False) → torch.Tensor[source]

Applies the HardSigmoid function element-wise.

See torchlayers.activations.HardSigmoid for more details.

Parameters
  • tensor (torch.Tensor) – Tensor activated element-wise

  • inplace (bool, optional) – Whether operation should be performed in-place. Default: False

Returns

Tensor with HardSigmoid applied element-wise

Return type

torch.Tensor
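
Example (a minimal sketch of the functional form, including the documented in-place variant):

>>> import torch
>>> from torchlayers.activations import hard_sigmoid
>>> x = torch.randn(4)
>>> out = hard_sigmoid(x)              # new tensor with values in [0, 1]
>>> _ = hard_sigmoid(x, inplace=True)  # modifies x in place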

torchlayers.activations.hard_swish(tensor: torch.Tensor) → torch.Tensor[source]

Applies the HardSwish function element-wise.

See torchlayers.activations.HardSwish for more details.

Parameters

tensor (torch.Tensor) – Tensor activated element-wise

Returns

Tensor with HardSwish applied element-wise

Return type

torch.Tensor
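
Example (a minimal sketch; equivalent to calling the HardSwish module):

>>> import torch
>>> from torchlayers.activations import hard_swish
>>> x = torch.randn(4)
>>> out = hard_swish(x)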

torchlayers.activations.swish(tensor: torch.Tensor, beta: float = 1.0) → torch.Tensor[source]

Applies the Swish function element-wise.

See torchlayers.activations.Swish for more details.

Parameters
  • tensor (torch.Tensor) – Tensor activated element-wise

  • beta (float, optional) – Multiplier used for sigmoid. Default: 1.0 (standard sigmoid)

Returns

Tensor with Swish applied element-wise

Return type

torch.Tensor
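
Example (a minimal sketch with a non-default beta):

>>> import torch
>>> from torchlayers.activations import swish
>>> x = torch.randn(4)
>>> out = swish(x, beta=2.0)  # sharper sigmoid gate than the default beta=1.0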