torchfunc
class torchfunc.Timer(function: Callable = time.perf_counter)

Measure execution time of function.
Can be used as a context manager or as a function decorator; it can register checkpoints and report absolute time since the start of measurement.
Used as context manager:
```python
with Timer() as timer:
    ...  # your operations
    print(timer)  # __str__ calls timer.time() internally
    timer.checkpoint()  # register checkpoint
    ...  # more operations
    print(timer.checkpoint())  # time since last timer.checkpoint() call
...  # even more operations
print(timer)  # time taken for the block, will not be updated outside of it
```
When execution leaves the block, the timer is stopped. The last checkpoint and the time taken to execute the whole block are returned by the checkpoint() and time() methods respectively.

Used as function decorator:
```python
@Timer()
def foo():
    return 42

value, time = foo()
```
- Parameters
function (Callable, optional) – No-argument function used to measure time. Default: time.perf_counter
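The context-manager behaviour described above — checkpoints measuring deltas, and the measurement freezing once the block exits — can be sketched with a minimal stand-in. This is an illustration of the pattern only, not torchfunc's actual implementation:

```python
import time


class MiniTimer:
    """Minimal sketch of a Timer-like context manager (not torchfunc's code)."""

    def __init__(self, function=time.perf_counter):
        self.function = function
        self.start = self.function()
        self.last = self.start
        self._frozen = None  # set when the block exits

    def time(self):
        # Absolute time since creation; frozen once the block has exited.
        if self._frozen is not None:
            return self._frozen
        return self.function() - self.start

    def checkpoint(self):
        # Time elapsed since the previous checkpoint (or since the start).
        now = self.function()
        elapsed = now - self.last
        self.last = now
        return elapsed

    def __enter__(self):
        return self

    def __exit__(self, *exc):
        self._frozen = self.function() - self.start
        return False


with MiniTimer() as timer:
    sum(range(1000))  # some work
    first = timer.checkpoint()
total = timer.time()
print(total >= first)  # True: whole-block time covers the first segment
```

Because the total is frozen in `__exit__`, later calls to `time()` keep returning the block's duration, mirroring the "will not be updated outside of it" behaviour.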
torchfunc.info(general: bool = True, cuda: bool = True) → str

Return host-related information as a string.
This function may help you tailor your module's architecture to the specific environment it will run on.
For in-depth information regarding possible performance improvements, see the torchfunc.performance submodule.

Information is divided into two sections:
- general – related to OS, Python version etc.
- cuda – specific to CUDA hardware
Example:
```python
print(torchfunc.info(general=False))
```
- Parameters
general (bool, optional) – Return general information. Default: True

cuda (bool, optional) – Return CUDA-related information. Default: True
- Returns
Description of system and/or GPU.
- Return type
str
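For a rough idea of the kind of general information such a function gathers, here is a standard-library sketch; the field names and format are illustrative assumptions, not torchfunc's actual output:

```python
import platform


def general_info() -> str:
    # Assemble a few host facts, similar in spirit to torchfunc.info(general=True).
    lines = [
        f"OS: {platform.system()} {platform.release()}",
        f"Python: {platform.python_version()}",
        f"Machine: {platform.machine()}",
    ]
    return "\n".join(lines)


print(general_info())
```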
torchfunc.installed(module: str) → bool

Return True if the module is installed.
Example:
```python
# Check whether the mixed precision library is available
print(torchfunc.installed("apex"))
```
- Parameters
module (str) – Name of the module to be checked.
- Returns
True if installed.
- Return type
bool
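A common way to implement such a check is importlib.util.find_spec; the sketch below illustrates the idea and is not necessarily how torchfunc does it:

```python
import importlib.util


def installed(module: str) -> bool:
    # find_spec returns None when the top-level module cannot be located.
    return importlib.util.find_spec(module) is not None


print(installed("json"))                # True: stdlib module
print(installed("no_such_module_xyz"))  # False
```

Unlike importing the module outright, find_spec only locates it, so the check does not run the module's import-time code.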
class torchfunc.seed(value, cuda: bool = False)

Seed PyTorch and numpy.

This code is based on PyTorch's reproducibility guide: https://pytorch.org/docs/stable/notes/randomness.html

Can be used as a standard seeding procedure, as a context manager (seed is changed only within the block) or as a function decorator.
Standard seed:
```python
torchfunc.seed(0)  # no surprises I guess
```
Used as context manager:
```python
with seed(1):
    ...  # your operations
print(torch.initial_seed())  # should be back to the seed from before the block
```
Used as function decorator:
```python
@seed(1)  # seed only within function
def foo():
    return 42
```
Important: it is impossible to restore the original numpy seed after the context manager or decorator exits, hence it will be set to the original PyTorch seed.

- Parameters
value (int) – Seed value used in np.random.seed and torch.manual_seed. Usually an int is provided.
cuda (bool, optional) – Whether to set PyTorch's CUDA backend into deterministic mode (setting cudnn.benchmark to False and cudnn.deterministic to True). If False, consecutive runs may differ slightly. If True, automatic autotuning for convolution layers with consistent input shapes will be turned off. Default: False
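The restore-on-exit pattern behind the context-manager usage can be illustrated with the standard random module. This sketch shows only the idea — save the RNG state, seed, restore on exit — whereas torchfunc's seed additionally handles PyTorch, numpy and CUDA state:

```python
import random


class ScopedSeed:
    """Sketch: seed inside the block, restore the previous RNG state on exit."""

    def __init__(self, value: int):
        self.value = value

    def __enter__(self):
        self._state = random.getstate()  # remember the current RNG state
        random.seed(self.value)
        return self

    def __exit__(self, *exc):
        random.setstate(self._state)  # restore the state from before the block
        return False


random.seed(0)
before = random.random()
random.seed(0)  # rewind, so the outer stream is reproducible
with ScopedSeed(42):
    inside = random.random()  # drawn from the seeded stream
after = random.random()
print(after == before)  # True: the outer stream is unaffected by the block
```

Note the caveat from the text: numpy's generator has no equivalent of getstate-style restoration in torchfunc's implementation, so after the block it is re-seeded rather than restored.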
torchfunc.sizeof(obj) → int

Get size in bytes of a Tensor, torch.nn.Module or standard object.

Specific routines are defined for torch.Tensor and torch.nn.Module objects; they calculate how much memory in bytes those objects consume. If any other object is passed, sys.getsizeof will be called on it.

This function works similarly to C++'s sizeof operator.
Example:
```python
module = torch.nn.Linear(20, 20)
bias = 20 * 4  # in bytes
weights = 20 * 20 * 4  # in bytes
print(torchfunc.sizeof(module) == bias + weights)  # True
```
- Parameters
obj – Object whose size will be measured.
- Returns
Size in bytes of the object.
- Return type
int
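The arithmetic in the example above can be checked without torch: Linear(20, 20) stores a 20×20 weight matrix plus a 20-element bias, each float32 element taking 4 bytes. The helper below is purely illustrative, not part of torchfunc:

```python
def linear_param_bytes(in_features: int, out_features: int, dtype_bytes: int = 4) -> int:
    # weight: out_features x in_features elements, bias: out_features elements
    weights = out_features * in_features * dtype_bytes
    bias = out_features * dtype_bytes
    return weights + bias


print(linear_param_bytes(20, 20))  # 20*20*4 + 20*4 = 1680
```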