torchtraining¶
Classes¶
Root module of torchtraining containing common operations.
Note
IMPORTANT: This module is one of the most important and is used in almost any DL task so be sure to understand it!
Operations in this module can be used to:

- control pipeline flow
- select output from `step`s
- send data to multiple operations
See below for more info.
class torchtraining.Flatten(types: Tuple = (list, tuple))[source]¶

Flatten arbitrarily nested data.

Example:

```python
class TrainStep(tt.steps.Train):
    def forward(self, module, sample):
        module1, module2 = module
        ...
        return ((logits1, targets1), (logits2, targets2), module1, module2)


step = TrainStep(criterion, device)

# Tuple (logits1, targets1, logits2, targets2, module1, module2)
step ** tt.Flatten()
```

- Parameters
  types (Tuple[type], optional) – Types to be considered non-flat; instances of these will be recursively flattened. Default: (list, tuple)
- Returns
  Single flat tuple containing all nested samples.
- Return type
  Tuple[samples]
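The flattening behaviour can be sketched in plain Python. This is a hypothetical `flatten` helper illustrating the recursion, not the actual library implementation:

```python
# Hypothetical sketch of Flatten's logic: recursively expand only the
# container types listed in `types`, leaving everything else untouched.
def flatten(data, types=(list, tuple)):
    result = []
    for item in data:
        if isinstance(item, types):
            # Recurse into nested containers of the configured types
            result.extend(flatten(item, types))
        else:
            # Leave non-container items (e.g. tensors, modules) as-is
            result.append(item)
    return result


nested = ((1, 2), (3, (4, 5)), "module")
print(flatten(nested))  # [1, 2, 3, 4, 5, 'module']
```

Note that strings survive intact because `str` is not in `types`; only `list` and `tuple` instances are expanded by default.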
class torchtraining.If(condition: Union[bool, Callable[Any, bool]], operation: Callable[Any, Any])[source]¶

Run operation only if condition is True.

condition can also be a single-argument callable; in this case it can depend on the data, see below:

```python
class TrainStep(tt.steps.Train):
    def forward(self, module, sample):
        ...
        return loss


step = TrainStep(criterion, device)

step ** tt.If(lambda loss: loss > 10, tt.callbacks.Logger("VERY HIGH LOSS!!!"))
```

- Parameters
  condition (bool | Callable(Any) -> bool) – Boolean value or callable getting single argument (data) and returning a boolean. If True, run the underlying operation.
  operation (torchtraining.Op | Callable) – Operation or callable getting single argument (data) and returning anything.
- Returns
  If condition is True, returns value from operation, otherwise passes original data through.
- Return type
  Any

forward(data: Any) → Any[source]¶

- Parameters
  data (Any) – Anything you want (usually torch.Tensor like stuff).
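The dispatch semantics can be sketched as follows. The `run_if` helper is hypothetical, shown only to illustrate how a boolean vs. callable condition is resolved:

```python
# Sketch of If's semantics (hypothetical helper, not the library code):
# evaluate the condition (calling it on the data when it is callable),
# run the operation when it holds, otherwise pass the data through.
def run_if(condition, operation, data):
    holds = condition(data) if callable(condition) else condition
    return operation(data) if holds else data


print(run_if(lambda loss: loss > 10, lambda loss: f"HIGH: {loss}", 42))  # HIGH: 42
print(run_if(False, lambda loss: "never", 3))  # 3
```

When the condition does not hold, the original data flows on unchanged, so an `If` can be dropped into the middle of a pipe without affecting downstream operations.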
class torchtraining.IfElse(condition: bool, operation1: Callable[Any, Any], operation2: Callable[Any, Any])[source]¶

Run operation1 if condition is True, otherwise run operation2.

condition can also be a single-argument callable; in this case it can depend on the data, see below:

```python
class TrainStep(tt.steps.Train):
    def forward(self, module, sample):
        ...
        return loss


step = TrainStep(criterion, device)

step ** tt.IfElse(
    lambda loss: loss > 10,
    tt.callbacks.Logger("VERY HIGH LOSS!!!"),
    tt.callbacks.Logger("LOSS IS NOT THAT HIGH..."),
)
```

- Parameters
  condition (bool) – Boolean value. If True, run operation1, otherwise run operation2.
  operation1 (torchtraining.Op | Callable) – Operation or callable getting single argument (data) and returning anything.
  operation2 (torchtraining.Op | Callable) – Operation or callable getting single argument (data) and returning anything.
- Returns
  If condition is True, returns value from operation1, otherwise from operation2.
- Return type
  Any

forward(data: Any) → Any[source]¶

- Parameters
  data (Any) – Anything you want (usually torch.Tensor like stuff).
class torchtraining.Lambda(operation: Callable[Any, Any], name: str = 'torchtraining.Lambda')[source]¶

Run user-specified operation on data.

Example:

```python
class TrainStep(tt.steps.Train):
    def forward(self, module, sample):
        ...
        return accuracy


step = TrainStep(criterion, device)

# If you want to get that SOTA badly, we got ya covered
step ** tt.Lambda(lambda accuracy: accuracy * 2)
```

- Parameters
  operation (Callable(Any) -> Any) – Callable getting single argument (data) and returning anything.
  name (str, optional) – Name of the operation, used for its representation. Default: "torchtraining.Lambda"
- Returns
  Value returned from operation
- Return type
  Any

forward(data: Any) → Any[source]¶

- Parameters
  data (Any) – Anything you want (usually torch.Tensor like stuff).
class torchtraining.OnSplittedTensor(operation: torchtraining._base.Operation, dim: int = 0)[source]¶

Split tensor along a dimension and apply operation to each element.

By default, torch.Tensor will be split along the batch dimension (dim=0).

Note

IMPORTANT: After splitting, the first dimension is squeezed via torch.squeeze.

Example:

```python
class TrainStep(tt.steps.Train):
    def forward(self, module, sample):
        # Dummy step
        images, labels = sample
        return images


step = TrainStep(criterion, device)

# Assume summary_writer is an instance of torch.utils.tensorboard.SummaryWriter
step ** tt.OnSplittedTensor(tt.callbacks.tensorboard.Image(summary_writer))
# Each image will be saved separately
```

- Parameters
  operation (tt.Operation | Callable(data) -> Any) – Operation which will be applied to each element of torch.Tensor.
  dim (int, optional) – Dimension along which the data torch.Tensor will be split. Default: 0
- Returns
  data split along dim (unmodified by operation).
- Return type
  Tuple[torch.Tensor]

forward(data)[source]¶

- Parameters
  data (torch.Tensor) – Tensor to split and apply operation on.
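The split-then-apply behaviour can be sketched without torch, using a nested list standing in for a tensor of shape (batch, ...). The `on_splitted` helper below is hypothetical and only mirrors the documented semantics: the operation runs on each element (typically for side effects such as logging), while the split data itself is returned unmodified:

```python
# Sketch of OnSplittedTensor's behaviour (hypothetical helper): iterating a
# nested list along its first axis stands in for torch.split + torch.squeeze
# along dim=0.
def on_splitted(operation, data):
    pieces = tuple(data)  # one element per entry along the first dimension
    for piece in pieces:
        operation(piece)  # applied for side effects (e.g. saving each image)
    return pieces  # original split data, unmodified by `operation`


collected = []
result = on_splitted(collected.append, [[1, 2], [3, 4]])
print(result)     # ([1, 2], [3, 4])
print(collected)  # [[1, 2], [3, 4]]
```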
class torchtraining.Select(**output_selection: int)[source]¶

Select output item(s) returned from step or iteration objects.

Allows users to focus on a specific part of the output and pipe specified values to other operations (like metrics, loggers etc.).

Note

IMPORTANT: This operation is run in almost any case so be sure to understand how it works.

Example:

```python
class TrainStep(tt.steps.Train):
    def forward(self, module, sample):
        # Generate loss and other necessary items
        ...
        return loss, predictions, targets


step = TrainStep(criterion, device)

# Select `loss` and perform backpropagation.
# Only a single value will be forwarded to Backward from the
# (loss, predictions, targets) tuple
step ** tt.Select(loss=0) ** tt.pytorch.Backward()
```

Note

The name of the keyword argument can be arbitrary, but you are really encouraged to name it like the variable returned from step (or at least make it understandable to others).

- Parameters
  output_selection (**output_selection) – name: output_index mapping selecting which elements of the tuple returned from step to choose. name can be arbitrary, but should be named like the variable returned from step. See example above.
- Returns
  If a single index is passed in output_selection, returns a single element of the Iterable; otherwise returns the chosen elements as a list.
- Return type
  Any | List[Any]
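The indexing semantics can be sketched in plain Python. The `select` helper is hypothetical, illustrating only the single-element vs. list behaviour described above:

```python
# Sketch of Select's semantics (hypothetical helper): one keyword yields the
# element itself, several keywords yield a list of the chosen elements.
def select(outputs, **output_selection):
    chosen = [outputs[index] for index in output_selection.values()]
    return chosen[0] if len(chosen) == 1 else chosen


step_output = ("loss_value", "predictions", "targets")
print(select(step_output, loss=0))               # loss_value
print(select(step_output, logits=1, targets=2))  # ['predictions', 'targets']
```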
class torchtraining.Split(*operations: Callable[Any, Any], return_modified: bool = False)[source]¶

Split the pipe, passing data to multiple operations.

Note

IMPORTANT: This operation is run in almost any case so be sure to understand how it works.

Useful when users wish to use a calculated result in multiple places. Example calculating metrics and logging them in multiple places:

```python
class TrainStep(tt.steps.Train):
    def forward(self, module, sample):
        # Generate loss and other necessary items
        ...
        # Assume binary classification
        return loss, logits, targets


step = TrainStep(criterion, device)

# Push (logits, targets) to Precision and Recall
# and log those values after calculating metrics
step ** tt.Select(logits=1, targets=2) ** tt.Split(
    tt.metrics.classification.binary.Precision() ** tt.callbacks.Logger("Precision"),
    tt.metrics.classification.binary.Recall() ** tt.callbacks.Logger("Recall"),
)
```

- Parameters
  operations (*Callable(Any) -> Any) – Operations or callables, each receiving the same data as a single argument.
  return_modified (bool, optional) – If True, return a list of data modified by operations; otherwise return the original data. Default: False
- Returns
  Returns data passed originally or a list containing modified data returned from operations.
- Return type
  data | List[modified data]
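The fan-out semantics can be sketched as follows. The `split` helper is hypothetical, showing only the documented behaviour: every operation receives the same data, and `return_modified` decides what flows on:

```python
# Sketch of Split's fan-out (hypothetical helper): each operation gets the
# same data; by default the original data is passed on, while
# return_modified=True forwards the operations' results instead.
def split(data, *operations, return_modified=False):
    results = [operation(data) for operation in operations]
    return results if return_modified else data


print(split(5, lambda x: x + 1, lambda x: x * 2))                        # 5
print(split(5, lambda x: x + 1, lambda x: x * 2, return_modified=True))  # [6, 10]
```

Returning the original data by default is what lets a `Split` of loggers or metric sinks sit in the middle of a pipe without disturbing it.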
class torchtraining.ToAll(operation: Callable)[source]¶

Apply operation to each element of the sample.

Note

If you want to apply operation to all nested elements (e.g. in a nested tuple), please use the torchtraining.Flatten object first.

Example:

```python
class TrainStep(tt.steps.Train):
    def forward(self, module, sample):
        ...
        return logits1, logits2


step = TrainStep(criterion, device)

# Apply sigmoid to each element of the (logits1, logits2) tuple
step ** tt.ToAll(torch.sigmoid)
```

- Parameters
  operation (Callable) – Pipe to apply to each element of sample.
- Returns
  Tuple consisting of subsamples with operation applied.
- Return type
  Tuple[operation(subsample)]

forward(sample)[source]¶

- Parameters
  sample (Any) – Anything you want (usually torch.Tensor like stuff).
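The mapping behaviour can be sketched in plain Python. The `to_all` helper is hypothetical, illustrating only that the operation is applied to every top-level element, without descending into nested containers (use Flatten first for that):

```python
# Sketch of ToAll's semantics (hypothetical helper): map the operation over
# the top-level elements of the sample and return the results as a tuple.
def to_all(operation, sample):
    return tuple(operation(element) for element in sample)


print(to_all(lambda x: x * 2, (1, 2, 3)))  # (2, 4, 6)
print(to_all(len, ([1, 2], [3])))          # (2, 1)
```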