torchtraining.accumulators¶

Accumulate results from iterations or epochs.
Note

IMPORTANT: This module is one of the core features, so be sure to understand how it works.

Note

IMPORTANT: Accumulators should be applied to iteration objects. This way they can efficiently accumulate values which are later passed to other operations.
Example:

    iteration
    ** tt.Select(predictions=1, labels=2)
    ** tt.metrics.classification.multiclass.Accuracy()
    ** tt.accumulators.Mean()
    ** tt.Split(
        tt.callbacks.Log(f"{name} Accuracy"),
        tt.callbacks.tensorboard.Scalar(writer, f"{name}/Accuracy"),
    )

The code above will accumulate accuracy from each step, and after the iteration ends the result will be sent to tt.Split.
Note

IMPORTANT: If users wish to implement their own accumulators, forward shouldn't return anything but should accumulate data in the self.data variable. The no-argument calculate should return self.data after computing the accumulated value (e.g. for mean this would be division by the number of samples).
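The protocol described in the note above can be sketched as a stand-alone class. This is a hypothetical example: it does not subclass the real torchtraining.Accumulator base, and the RunningMax name is invented for illustration; only the forward/calculate contract from the note is reproduced.

```python
# Minimal sketch of a custom accumulator following the described protocol:
# `forward` stores into `self.data` and returns nothing, while the
# no-argument `calculate` returns `self.data` after any final computation.


class RunningMax:
    """Keep the maximum of all values seen so far."""

    def __init__(self):
        self.data = None

    def forward(self, data) -> None:
        # Accumulate into self.data; do not return anything.
        if self.data is None:
            self.data = data
        else:
            self.data = max(self.data, data)

    def calculate(self):
        # Return the accumulated value (no extra work needed for max).
        return self.data


accumulator = RunningMax()
for value in [3, 7, 2]:
    accumulator.forward(value)
print(accumulator.calculate())  # → 7
```

In the real library such a class would additionally inherit from the accumulator base class so it composes with the `**` pipeline operator.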
-

class torchtraining.accumulators.Except(accumulator: torchtraining._base.Accumulator, begin=None, end=None)[source]¶

Special modifier of accumulators which accumulates every value except the specified ones.

Note

IMPORTANT: At least one of begin, end has to be specified.

Note

IMPORTANT: This accumulator is useful in conjunction with torchtraining.iterations.Multi (e.g. for GANs and other irregular types of training). Users can effectively choose which data coming from a step should be accumulated and can divide accumulation based on that.
- Parameters

  accumulator (tt.Accumulator) – Instance of accumulator to use for data accumulation.

  begin (int | torch.Tensor[int], optional) – If int, it should specify the beginning of the incoming value stream which will not be taken into accumulation. If torch.Tensor containing integers, it should specify consecutive beginnings of streams which are not taken into account. If left unspecified (None), begin is assumed to be the `0`th step. Every element of the stream matching the [begin, end] range (modulo) will not be forwarded to the accumulator.

  end (int | torch.Tensor[int], optional) – If int, it should specify the end of the incoming value stream which will not be taken into accumulation. If torch.Tensor containing integers, it should specify consecutive ends of streams which will not be taken into account. If left unspecified (None), end is assumed to be the same as begin. This effectively excludes every begin element coming from the value stream. Every element of the stream matching the [begin, end] range (modulo) will not be forwarded to the accumulator.
- Returns

  Whatever accumulator returns after accumulation. At each step the proper value up to this point is returned nonetheless. Usually torch.Tensor or list.

- Return type

  Any
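The exclusion rule for the plain-int case can be sketched as follows. This is an assumption-laden illustration, not the real implementation: ExceptSketch and SumAccumulator are invented names, and the sketch only models the simple "skip steps whose index falls in [begin, end]" reading of the parameter description above, without the tensor or modulo variants.

```python
# Hypothetical sketch of `Except`-style filtering: steps whose index lies
# inside [begin, end] are NOT forwarded to the wrapped accumulator.


class SumAccumulator:
    """Stand-in for a wrapped accumulator (here: a running sum)."""

    def __init__(self):
        self.data = 0

    def forward(self, value) -> None:
        self.data += value

    def calculate(self):
        return self.data


class ExceptSketch:
    def __init__(self, accumulator, begin=None, end=None):
        if begin is None and end is None:
            raise ValueError("One of begin, end has to be specified")
        self.accumulator = accumulator
        # Unspecified `begin` defaults to step 0.
        self.begin = 0 if begin is None else begin
        # Unspecified `end` defaults to `begin`, excluding a single step.
        self.end = self.begin if end is None else end
        self._step = 0

    def forward(self, value) -> None:
        # Forward only steps outside the excluded [begin, end] range.
        if not (self.begin <= self._step <= self.end):
            self.accumulator.forward(value)
        self._step += 1

    def calculate(self):
        return self.accumulator.calculate()


except_sum = ExceptSketch(SumAccumulator(), begin=0, end=1)
for value in [10, 20, 3, 4]:
    except_sum.forward(value)
print(except_sum.calculate())  # → 7 (steps 0 and 1 were excluded)
```

This mirrors the Multi-iteration use case: values produced by the steps you want to ignore (e.g. discriminator-only steps in GAN training) simply never reach the inner accumulator.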
-

class torchtraining.accumulators.List[source]¶

Gather data coming into this object into a list.

Note

IMPORTANT: It is advised NOT TO USE this accumulator due to memory inefficiencies. Prefer torchtraining.accumulators.Sum or torchtraining.accumulators.Mean instead.

self.data is a list containing the data received up to this moment. data does not have to implement any concept (as it is only appended to the list).

- Returns

  List of values after accumulation. At each step the proper list up to this point is returned nonetheless.

- Return type

  list
-

calculate() → Any[source]¶

Calculate final value.

- Returns

  List with gathered data.

- Return type

  list
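The behavior of this accumulator can be sketched stand-alone. ListSketch is a hypothetical name; only the documented semantics (values are appended, no arithmetic concept required, memory grows with every step) are reproduced, which is also why the note above recommends Sum or Mean instead.

```python
# Stand-alone sketch of the `List` accumulator's behavior: values are only
# appended, so `data` needs no `+=` support, but memory grows every step.


class ListSketch:
    def __init__(self):
        self.data = []

    def forward(self, data) -> None:
        # Append raw data; no arithmetic concept required of `data`.
        self.data.append(data)

    def calculate(self):
        # The gathered list itself is the final value.
        return self.data


accumulator = ListSketch()
for batch_metric in [0.5, 0.75, 1.0]:
    accumulator.forward(batch_metric)
print(accumulator.calculate())  # → [0.5, 0.75, 1.0]
```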
-

class torchtraining.accumulators.Mean[source]¶

Take the mean of the data coming into this object.

data should have the += operator implemented between its instances and Python integers.

Note

IMPORTANT: This is one of the memory-efficient accumulators and can be safely used. It should be preferred over accumulating data via torchtraining.accumulators.List.

- Returns

  Mean of values after accumulation. At each step the proper mean up to this point is returned nonetheless. Usually torch.Tensor, but can be anything implementing the concept above.

- Return type

  torch.Tensor | Any
-

calculate() → Any[source]¶

Calculate final value.

- Returns

  Accumulated data after summation and division by the number of samples.

- Return type

  Any
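Combining the accumulator protocol with the documented behavior, a mean accumulator can be sketched like this. MeanSketch is a hypothetical name and the sample counter is an implementation assumption; only the contract described above (running sum in self.data, division by the number of samples in calculate) is shown.

```python
# Stand-alone sketch of a `Mean`-style accumulator: `forward` keeps a
# running sum in `self.data` and counts samples; `calculate` divides the
# sum by the sample count, matching the note on implementing accumulators.


class MeanSketch:
    def __init__(self):
        self.data = 0
        self._samples = 0

    def forward(self, data) -> None:
        # `data` only needs to support `+=` with the accumulated value.
        self.data += data
        self._samples += 1

    def calculate(self):
        # Final value: accumulated sum divided by the number of samples.
        return self.data / self._samples


accumulator = MeanSketch()
for accuracy in [0.8, 0.9, 1.0]:
    accumulator.forward(accuracy)
print(accumulator.calculate())  # ≈ 0.9
```

Because only a running sum and a counter are stored, memory use is constant regardless of how many steps are accumulated, unlike the List accumulator.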
-

class torchtraining.accumulators.Sum[source]¶

Sum data coming into this object.

data should have the += operator implemented between its instances and Python integers.

Note

IMPORTANT: This is one of the memory-efficient accumulators and can be safely used.

- Returns

  Sum of values after accumulation. At each step the proper sum up to this point is returned nonetheless. Usually torch.Tensor, but can be anything "summable".

- Return type

  torch.Tensor | Any
-

calculate() → Any[source]¶

Calculate final value.

- Returns

  Data accumulated via addition.

- Return type

  Any
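A stand-alone sketch of this contract, with SumSketch as a hypothetical name, also illustrates the "at each step the proper value up to this point is returned" behavior by reading the accumulated value after every forward call:

```python
# Stand-alone sketch of a `Sum`-style accumulator. Reading `calculate()`
# after each `forward` shows the running value available at every step.


class SumSketch:
    def __init__(self):
        self.data = 0

    def forward(self, data) -> None:
        # `data` only needs `+=` between its instances (anything "summable").
        self.data += data

    def calculate(self):
        return self.data


accumulator = SumSketch()
running = []
for value in [1, 2, 3]:
    accumulator.forward(value)
    running.append(accumulator.calculate())  # running sums: 1, 3, 6
print(running[-1])  # → 6
```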