
torchtraining.accumulators

Accumulate results from iterations or epochs

Note

IMPORTANT: This module is one of the core features, so be sure to understand how it works.

Note

IMPORTANT: Accumulators should be applied to iteration objects. This way they can efficiently accumulate values which are later passed to other operations.

Example:

iteration
** tt.Select(predictions=1, labels=2)
** tt.metrics.classification.multiclass.Accuracy()
** tt.accumulators.Mean()
** tt.Split(
    tt.callbacks.Log(f"{name} Accuracy"),
    tt.callbacks.tensorboard.Scalar(writer, f"{name}/Accuracy"),
)

The code above accumulates accuracy from each step; after the iteration ends, the accumulated mean is sent to tt.Split.

Note

IMPORTANT: If users wish to implement their own accumulators, forward shouldn't return anything but should instead accumulate data in the self.data variable. The no-argument calculate should return self.data after computing the final accumulated value (e.g. for a mean this would be division by the number of samples).
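As a concrete illustration of this contract, here is a minimal, hypothetical Product accumulator written as a standalone class (a real implementation would subclass torchtraining's accumulator base class, which is omitted here for brevity):

```python
class Product:
    """Hypothetical accumulator multiplying all incoming values.

    Follows the contract described above: ``forward`` returns nothing
    and only accumulates into ``self.data``; the no-argument
    ``calculate`` returns ``self.data`` after any final computation.
    """

    def __init__(self):
        self.reset()

    def forward(self, data) -> None:
        # Accumulate into self.data; do not return anything.
        self.data *= data

    def calculate(self):
        # No extra finalization is needed for a product.
        return self.data

    def reset(self) -> None:
        # Multiplicative identity as the starting value.
        self.data = 1


accumulator = Product()
for value in (2, 3, 4):
    accumulator.forward(value)
assert accumulator.calculate() == 24
```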

class torchtraining.accumulators.Except(accumulator: torchtraining._base.Accumulator, begin=None, end=None)[source]

Special modifier of accumulators which accumulates every value except those specified.

Note

IMPORTANT: At least one of begin, end has to be specified.

Note

IMPORTANT: This accumulator is useful in conjunction with torchtraining.iterations.Multi (e.g. for GANs and other irregular types of training).

Users can effectively choose which data coming from a step should be accumulated and can divide accumulation based on that.

Parameters
  • accumulator (tt.Accumulator) – Instance of accumulator to use for data accumulation.

  • begin (int | torch.Tensor[int], optional) – If int, it specifies the beginning of the incoming value stream which will not be taken into accumulation. If a torch.Tensor containing integers, it specifies consecutive beginnings of streams which are not taken into account. If left unspecified (None), begin is assumed to be the `0`th step. Every element of the stream whose (modulo) index falls within the [begin, end] range will not be forwarded to the accumulator.

  • end (int | torch.Tensor[int], optional) – If int, it specifies the end of the incoming value stream which will not be taken into accumulation. If a torch.Tensor containing integers, it specifies consecutive ends of streams which will not be taken into account. If left unspecified (None), end is assumed to be the same as begin; this effectively excludes every begin-th element coming from the value stream. Every element of the stream whose (modulo) index falls within the [begin, end] range will not be forwarded to the accumulator.

Returns

Whatever the wrapped accumulator returns after accumulation. At each step, the proper value up to this point is returned nonetheless; usually torch.Tensor or list.

Return type

Any
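The exclusion idea can be sketched in plain Python as follows. This is a simplified, hypothetical stand-in: it handles only integer begin/end without the tensor and modulo-repetition variants, and Sum here is a minimal stand-in for torchtraining.accumulators.Sum:

```python
class Sum:
    """Minimal stand-in for torchtraining.accumulators.Sum."""

    def __init__(self):
        self.data = 0

    def forward(self, data) -> None:
        self.data += data

    def calculate(self):
        return self.data


class Except:
    """Sketch of the exclusion idea: skip every value whose step
    index falls inside [begin, end] and forward the rest to the
    wrapped accumulator (the real class also accepts tensors of
    indices and applies the range in a modulo fashion).
    """

    def __init__(self, accumulator, begin=None, end=None):
        if begin is None and end is None:
            raise ValueError("At least one of begin, end has to be specified.")
        self.accumulator = accumulator
        self.begin = 0 if begin is None else begin
        self.end = self.begin if end is None else end
        self._step = 0

    def forward(self, data) -> None:
        # Forward to the wrapped accumulator only outside [begin, end].
        if not (self.begin <= self._step <= self.end):
            self.accumulator.forward(data)
        self._step += 1

    def calculate(self):
        return self.accumulator.calculate()


# Accumulate steps 2 and 3 only, skipping steps 0 and 1.
except_sum = Except(Sum(), begin=0, end=1)
for value in (10, 20, 30, 40):
    except_sum.forward(value)
assert except_sum.calculate() == 70
```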

calculate() → Any[source]

Calculate final value.

Returns

Returns anything accumulator accumulated.

Return type

Any

forward(data) → None[source]
Parameters

data (Any) – Anything which accumulator can consume

reset() → None[source]

Reset internal accumulator.

class torchtraining.accumulators.List[source]

Store data coming into this object in a list.

Note

IMPORTANT: It is advised NOT TO USE this accumulator due to memory inefficiencies. Prefer torchtraining.accumulators.Sum or torchtraining.accumulators.Mean instead.

self.data is a list containing the data received up to this moment. data does not have to implement any concept (as it is only appended to the list).

Returns

List of values after accumulation. At each step proper list up to this point is returned nonetheless.

Return type

List

calculate() → List[Any][source]

Calculate final value.

Returns

List with gathered data.

Return type

List[Any]

forward(data) → None[source]
Parameters

data (Any) – Anything which can be appended to a list (i.e. any Python object).

reset() → None[source]

Assign an empty list to self.data, clearing the accumulated values.

class torchtraining.accumulators.Mean[source]

Take mean of the data coming into this object.

data should have the += operator implemented between its instances and Python integers.

Note

IMPORTANT: This is one of the memory-efficient accumulators and can be safely used. It should be preferred over accumulating data via torchtraining.accumulators.List.

Returns

Mean of values after accumulation. At each step, the proper mean up to this point is returned nonetheless. Usually torch.Tensor, but can be anything implementing the concept above.

Return type

torch.Tensor | Any

calculate() → Any[source]

Calculate final value.

Returns

Accumulated data after summation and division by number of samples.

Return type

torch.Tensor

forward(data: Any) → None[source]
Parameters

data (Any) – Anything which has the __iadd__/__add__ operator implemented between its instances and Python integers. It should also have __truediv__ implemented for proper mean calculation.

reset() → None[source]

Assign 0 to self.data and zero out the sample counter, clearing the accumulated values.
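The bookkeeping described above can be sketched as a standalone class (a hypothetical simplification; the real Mean subclasses torchtraining's accumulator base class and typically operates on torch.Tensor values):

```python
class Mean:
    """Sketch of Mean's bookkeeping: a running sum plus a sample
    counter, with the division deferred to ``calculate``. This is
    memory-efficient, unlike storing every value as List does.
    """

    def __init__(self):
        self.reset()

    def forward(self, data) -> None:
        # Accumulate the running sum and count one more sample.
        self.data += data
        self._counter += 1

    def calculate(self):
        # Final value: accumulated sum divided by number of samples.
        return self.data / self._counter

    def reset(self) -> None:
        self.data = 0
        self._counter = 0


mean = Mean()
for value in (2.0, 4.0, 6.0):
    mean.forward(value)
assert mean.calculate() == 4.0
```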

class torchtraining.accumulators.Sum[source]

Sum data coming into this object.

data should have the += operator implemented between its instances and Python integers.

Note

IMPORTANT: This is one of the memory-efficient accumulators and can be safely used.

Returns

Sum of values after accumulation. At each step, the proper sum up to this point is returned nonetheless. Usually torch.Tensor, but can be anything “summable”.

Return type

torch.Tensor | Any

calculate() → Any[source]

Calculate final value.

Returns

Data accumulated via addition.

Return type

torch.Tensor

forward(data) → None[source]
Parameters

data (Any) – Anything which has the __iadd__/__add__ operator implemented between its instances and Python integers.

reset() → None[source]

Assign 0 to self.data, clearing the accumulated values.
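Sum's behaviour can be sketched the same way (a hypothetical standalone version; the real class subclasses torchtraining's accumulator base class). Note how reset restores the additive identity so the accumulator can be reused across iterations:

```python
class Sum:
    """Sketch of Sum's behaviour: a single running total kept in
    ``self.data``, reset to 0 between runs.
    """

    def __init__(self):
        self.reset()

    def forward(self, data) -> None:
        # Add the incoming value to the running total; return nothing.
        self.data += data

    def calculate(self):
        # The running total is already the final value.
        return self.data

    def reset(self) -> None:
        # Additive identity as the starting value.
        self.data = 0


total = Sum()
for batch_loss in (0.5, 0.25, 0.25):
    total.forward(batch_loss)
assert total.calculate() == 1.0

total.reset()
assert total.calculate() == 0
```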