callback.early_stopping

class pydgn.training.callback.early_stopping.EarlyStopper(monitor: str, mode: str, checkpoint: bool = False)

Bases: pydgn.training.event.handler.EventHandler

EarlyStopper is the main event handler for early stopping. Just create a subclass that implements an early stopping criterion.

Parameters
  • monitor (str) – the metric to monitor. The format is [TRAINING|VALIDATION]_[METRIC NAME], where TRAINING and VALIDATION are defined in pydgn.static

  • mode (str) – can be MIN or MAX (as defined in pydgn.static)

  • checkpoint (bool) – whether we are interested in the checkpoint of the “best” epoch or not

on_epoch_end(state: pydgn.training.event.state.State)

At the end of an epoch, check that the validation score improves over the current best validation score. If so, store the necessary info in a dictionary and save it into the “best_epoch_results” property of the state. If it is time to stop, updates the stop_training field of the state.

Parameters

state (State) – object holding training information

stop(state: pydgn.training.event.state.State, score_or_loss: str, metric: str) → bool

Returns true when the early stopping technique decides it is time to stop.

Parameters
  • state (State) – object holding training information

  • score_or_loss (str) – whether to monitor scores or losses

  • metric (str) – the metric to consider. The format is [TRAINING|VALIDATION]_[METRIC NAME], where TRAINING and VALIDATION are defined in pydgn.static

Returns

a boolean specifying whether training should be stopped or not
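
As an example of the intended subclassing pattern, the sketch below implements a threshold-based stopper. The threshold criterion, the way the monitored value is read from the state, and the string value of the mode constant are all assumptions for illustration; only the stop() hook itself comes from this page.

    # Illustrative sketch of a custom early stopper (not part of PyDGN).
    from pydgn.training.event.state import State
    from pydgn.training.callback.early_stopping import EarlyStopper

    class ThresholdEarlyStopper(EarlyStopper):
        def __init__(self, monitor: str, mode: str, threshold: float,
                     checkpoint: bool = False):
            super().__init__(monitor, mode, checkpoint)
            self.threshold = threshold

        def stop(self, state: State, score_or_loss: str, metric: str) -> bool:
            # ASSUMPTION: the latest monitored value can be recovered from
            # the state; the exact attribute is not documented on this page.
            last_value = getattr(state, "epoch_results", {}).get(metric)
            if last_value is None:
                return False
            # ASSUMPTION: mode is stored on the instance and MAX == "max".
            if self.mode == "max":
                return last_value >= self.threshold
            return last_value <= self.threshold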

class pydgn.training.callback.early_stopping.PatienceEarlyStopper(monitor, mode, patience=30, checkpoint=False)

Bases: pydgn.training.callback.early_stopping.EarlyStopper

Early Stopper that implements patience

Parameters
  • monitor (str) – the metric to monitor. The format is [TRAINING|VALIDATION]_[METRIC NAME], where TRAINING and VALIDATION are defined in pydgn.static

  • mode (str) – can be MIN or MAX (as defined in pydgn.static)

  • patience (int) – the number of epochs of patience

  • checkpoint (bool) – whether we are interested in the checkpoint of the “best” epoch or not

stop(state, score_or_loss, metric)

Returns true when the number of epochs without improvement is greater than our patience parameter.

Parameters
  • state (State) – object holding training information

  • score_or_loss (str) – whether to monitor scores or losses

  • metric (str) – the metric to consider. The format is [TRAINING|VALIDATION]_[METRIC NAME], where TRAINING and VALIDATION are defined in pydgn.static

Returns

a boolean specifying whether training should be stopped or not
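
A programmatic instantiation might look like the following sketch; the VALIDATION and MIN constants are assumed to live in pydgn.static as stated above, and the metric name is illustrative.

    # Sketch: declare a patience-based stopper on the validation loss.
    from pydgn.static import VALIDATION, MIN  # assumed import location
    from pydgn.training.callback.early_stopping import PatienceEarlyStopper

    stopper = PatienceEarlyStopper(
        monitor=f"{VALIDATION}_loss",  # [VALIDATION]_[METRIC NAME] format
        mode=MIN,         # lower is better for a loss
        patience=30,      # epochs without improvement before stopping
        checkpoint=True,  # keep the checkpoint of the best epoch
    )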

callback.engine_callback

class pydgn.training.callback.engine_callback.EngineCallback(store_last_checkpoint: bool)

Bases: pydgn.training.event.handler.EventHandler

Class responsible for fetching data and handling current-epoch checkpoints at training time.

Parameters

store_last_checkpoint (bool) – if True, keep the model’s checkpoint for the last training epoch

on_epoch_end(state: pydgn.training.event.state.State)

Stores the checkpoint in a dictionary with the following fields:

  • EPOCH (as defined in pydgn.static)

  • MODEL_STATE (as defined in pydgn.static)

  • OPTIMIZER_STATE (as defined in pydgn.static)

  • SCHEDULER_STATE (as defined in pydgn.static)

  • STOP_TRAINING (as defined in pydgn.static)

Parameters

state (State) – object holding training information
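
For illustration, the resulting dictionary has roughly the shape below; the literal key strings in the comments stand in for the pydgn.static constants listed above, and the model/optimizer/scheduler are placeholders.

    import torch

    model = torch.nn.Linear(4, 2)                     # placeholder model
    optimizer = torch.optim.Adam(model.parameters())  # placeholder optimizer
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10)

    checkpoint = {
        "epoch": 42,                                # EPOCH
        "model_state": model.state_dict(),          # MODEL_STATE
        "optimizer_state": optimizer.state_dict(),  # OPTIMIZER_STATE
        "scheduler_state": scheduler.state_dict(),  # SCHEDULER_STATE
        "stop_training": False,                     # STOP_TRAINING
    }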

on_fetch_data(state: pydgn.training.event.state.State)

Fetches the next batch of data from the loader and updates the batch_input field of the state

Parameters

state (State) – object holding training information

on_forward(state: pydgn.training.event.state.State)

Calls the forward method of the model and stores the outputs in the batch_outputs field of the state.

Parameters

state (State) – object holding training information

class pydgn.training.callback.engine_callback.IterableEngineCallback(store_last_checkpoint: bool)

Bases: pydgn.training.callback.engine_callback.EngineCallback

Class that extends pydgn.training.callback.EngineCallback to process Iterable-style datasets. It must be used together with the appropriate engine class.

on_fetch_data(state: pydgn.training.event.state.State)

Fetches the next batch of data from the loader (if any, as data comes from a stream of unknown length) and updates the batch_input field of the state

Parameters

state (State) – object holding training information

class pydgn.training.callback.engine_callback.TemporalEngineCallback(store_last_checkpoint: bool)

Bases: pydgn.training.callback.engine_callback.EngineCallback

Class that extends pydgn.training.callback.EngineCallback to process temporal datasets. It must be used together with the appropriate engine class.

on_forward(state)

Calls the forward method of the model and stores the outputs in the batch_outputs field of the state. In addition to the input, passes to the model the hidden state computed at the previous time step.

Parameters

state (State) – object holding training information

callback.gradient_clipping

class pydgn.training.callback.gradient_clipping.GradientClipper(clip_value: float, **kwargs: dict)

Bases: pydgn.training.event.handler.EventHandler

GradientClipper is the main event handler for gradient clipping. Just pass the clipping value together with any additional arguments in the configuration file.

Parameters
  • clip_value (float) – the gradient will be clipped in [-clip_value, clip_value]

  • kwargs (dict) – additional arguments

on_backward(state: pydgn.training.event.state.State)

Clips the gradients of the model before the weights are updated.

Parameters

state (State) – object holding training information
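
Conceptually, the clipping performed here corresponds to PyTorch's value clipping, as in the sketch below; the exact call PyDGN issues is not shown on this page, so treat this as an assumption.

    import torch

    model = torch.nn.Linear(4, 2)          # placeholder model
    loss = model(torch.randn(8, 4)).sum()  # dummy loss
    loss.backward()

    clip_value = 1.0
    # Clamp every gradient entry into [-clip_value, clip_value]
    torch.nn.utils.clip_grad_value_(model.parameters(), clip_value)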

callback.metric

class pydgn.training.callback.metric.AdditiveLoss(*args: Any, **kwargs: Any)

Bases: pydgn.training.callback.metric.Metric

AdditiveLoss sums an arbitrary number of losses together.

Parameters
  • use_as_loss (bool) – whether this metric should act as a loss (i.e., it should act when on_backward() is called). Used by PyDGN, no need to care about this.

  • reduction (str) – the type of reduction to apply across samples of the mini-batch. Supports mean and sum. Default is mean.

  • accumulate_over_epoch (bool) – Whether or not to display the epoch-wise metric rather than an average of per-batch metrics. If true, it keeps a list of predictions and target values across the entire epoch. Use it especially with batch-sensitive metrics, such as micro AP/F1 scores. Default is True.

  • force_cpu (bool) – Whether or not to move all predictions to cpu before computing the epoch-wise loss/score. Default is True.

  • device (str) – The device used. Default is ‘cpu’.

  • losses_weights (dict) – dictionary of (loss_name, loss_weight) pairs that specifies the weight to apply to each loss in the sum (see the sketch after this list).

  • losses (dict) – dictionary of metrics to add together
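
The weighted sum computed by this class can be sketched as follows; the loss names and weights are illustrative, and the default weight of 1.0 for unspecified losses is an assumption rather than a documented PyDGN guarantee.

    import torch

    def weighted_sum(loss_values: dict, losses_weights: dict) -> torch.Tensor:
        # loss_values maps each loss name to its scalar tensor for the batch
        total = torch.zeros(())
        for name, value in loss_values.items():
            total = total + losses_weights.get(name, 1.0) * value
        return total

    # Two hypothetical losses combined with custom weights
    losses = {"mse": torch.tensor(0.8), "l1": torch.tensor(0.3)}
    print(weighted_sum(losses, {"mse": 1.0, "l1": 0.5}))  # tensor(0.9500)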

_instantiate_loss(loss)

Instantiates a loss with its own arguments (if any are given)

accumulate_predictions_and_targets(targets: torch.Tensor, *outputs: List[torch.Tensor]) → None

Accumulates predictions and targets of the batch into a list for each loss, so as to compute aggregated statistics at the end of an epoch.

Parameters
  • targets (torch.Tensor) – target tensor

  • outputs (List[torch.Tensor]) – outputs of the model

compute_metric(targets: torch.Tensor, predictions: torch.Tensor) → torch.tensor

Sums the value of all different losses into one

Parameters
  • targets (torch.Tensor) – tensor of ground truth values

  • predictions (torch.Tensor) – tensor of predictions of the model

Returns

A tensor with the metric value

forward(targets: torch.Tensor, *outputs: List[torch.Tensor]) → dict

For each loss, computes its value and returns all values in a dictionary, alongside the sum of all losses.

Parameters
  • targets (torch.Tensor) – ground truth

  • outputs (List[torch.Tensor]) – outputs of the model

property name: str

The name of the loss to be used in configuration files and displayed on Tensorboard

on_eval_batch_end(state: pydgn.training.event.state.State)

For each loss, computes the average metric in the batch wrt the number of timesteps (default is 1 for static datasets) unless statistics are accumulated over the entire epoch

Parameters

state (State) – object holding training information

on_eval_epoch_end(state: pydgn.training.event.state.State)

Computes an averaged or aggregated loss across the entire epoch, including itself as the main loss. Updates the field epoch_loss in state.

Parameters

state (State) – object holding training information

on_eval_epoch_start(state: pydgn.training.event.state.State)

Instantiates a dictionary with one list per loss (including itself, representing the sum of all losses)

Parameters

state (State) – object holding training information

on_training_batch_end(state: pydgn.training.event.state.State)

For each loss, computes the average metric in the batch wrt the number of timesteps (default is 1 for static datasets) unless statistics are accumulated over the entire epoch

Parameters

state (State) – object holding training information

on_training_epoch_end(state: pydgn.training.event.state.State)

Computes an averaged or aggregated loss across the entire epoch, including itself as the main loss. Updates the field epoch_loss in state.

Parameters

state (State) – object holding training information

on_training_epoch_start(state: pydgn.training.event.state.State)

Instantiates a dictionary with one list per loss (including itself, representing the sum of all losses)

Parameters

state (State) – object holding training information

class pydgn.training.callback.metric.Classification(*args: Any, **kwargs: Any)

Bases: pydgn.training.callback.metric.Metric

Generic metric for classification tasks. Used to maximize code reuse for classical metrics.

compute_metric(targets: torch.Tensor, predictions: torch.Tensor) → torch.tensor

Applies a classification metric (to be subclassed as it is None in this class)

Parameters
  • targets (torch.Tensor) – tensor of ground truth values

  • predictions (torch.Tensor) – tensor of predictions of the model

Returns

A tensor with the metric value

get_predictions_and_targets(targets: torch.Tensor, *outputs: List[torch.Tensor]) → Tuple[torch.Tensor, torch.Tensor]

Returns output[0] as predictions and dataset targets. Squeezes the first dimension of output and targets to get single vectors.

Parameters
  • targets (torch.Tensor) – ground truth

  • outputs (List[torch.Tensor]) – outputs of the model

Returns

A tuple of tensors (predicted_values, target_values)

property name: str

The name of the loss to be used in configuration files and displayed on Tensorboard

class pydgn.training.callback.metric.DotProductLinkPrediction(*args: Any, **kwargs: Any)

Bases: pydgn.training.callback.metric.Metric

Implements a dot product link prediction metric, as defined in https://arxiv.org/abs/1611.07308.

compute_metric(targets: torch.Tensor, predictions: torch.Tensor) → torch.tensor

Applies BCEWithLogits to link logits and targets.

Parameters
  • targets (torch.Tensor) – tensor of ground truth values

  • predictions (torch.Tensor) – tensor of predictions of the model

Returns

A tensor with the metric value

get_predictions_and_targets(targets: torch.Tensor, *outputs: List[torch.Tensor]) → Tuple[torch.Tensor, torch.Tensor]

Uses node embeddings (outputs[1]) and positive/negative edges (contained in targets by means of, e.g., a LinkPredictionSingleGraphDataProvider) to return logits and target labels of an edge classification task.

Parameters
  • targets (torch.Tensor) – ground truth

  • outputs (List[torch.Tensor]) – outputs of the model

Returns

A tuple of tensors (predicted_values, target_values)

property name: str

The name of the loss to be used in configuration files and displayed on Tensorboard

class pydgn.training.callback.metric.MeanAverageError(*args: Any, **kwargs: Any)

Bases: pydgn.training.callback.metric.Regression

Wrapper around torch.nn.L1Loss

property name: str

The name of the loss to be used in configuration files and displayed on Tensorboard

class pydgn.training.callback.metric.MeanSquareError(*args: Any, **kwargs: Any)

Bases: pydgn.training.callback.metric.Regression

Wrapper around torch.nn.MSELoss

property name: str

The name of the loss to be used in configuration files and displayed on Tensorboard

class pydgn.training.callback.metric.Metric(*args: Any, **kwargs: Any)

Bases: torch.nn.Module, pydgn.training.event.handler.EventHandler

Metric is the main event handler for all metrics. Other metrics can easily subclass it by implementing the forward() method, though sometimes more complex implementations are required.

Parameters
  • use_as_loss (bool) – whether this metric should act as a loss (i.e., it should act when on_backward() is called). Used by PyDGN, no need to care about this.

  • reduction (str) – the type of reduction to apply across samples of the mini-batch. Supports mean and sum. Default is mean.

  • accumulate_over_epoch (bool) – Whether or not to display the epoch-wise metric rather than an average of per-batch metrics. If true, it keeps a list of predictions and target values across the entire epoch. Use it especially with batch-sensitive metrics, such as micro AP/F1 scores. Default is True.

  • force_cpu (bool) – Whether or not to move all predictions to cpu before computing the epoch-wise loss/score. Default is True.

  • device (str) – The device used. Default is ‘cpu’.

accumulate_predictions_and_targets(targets: torch.Tensor, *outputs: List[torch.Tensor]) → None

Used to specify how to accumulate predictions and targets. This can be customized by subclasses like AdditiveLoss and MultiScore to accumulate predictions and targets for different losses/scores.

Parameters
  • targets – target tensor

  • *outputs – outputs of the model

compute_metric(targets: torch.Tensor, predictions: torch.Tensor) → torch.tensor

Computes the metric for a given set of targets and predictions

Parameters
  • targets (torch.Tensor) – tensor of ground truth values

  • predictions (torch.Tensor) – tensor of predictions of the model

Returns

A tensor with the metric value

forward(targets: torch.Tensor, *outputs: List[torch.Tensor]) → dict

Computes the metric value. Optionally, and only for metrics used as losses, some extra information can also be returned.

Parameters
  • targets (torch.Tensor) – ground truth

  • outputs (List[torch.Tensor]) – outputs of the model

  • batch_loss_extra (dict) – dictionary of information computed by metrics used as losses

Returns

A dictionary containing associations metric_name - value

get_main_metric_name() → str

Returns the metric’s main name. Useful when a metric is the combination of many.

Returns

the metric’s main name

get_predictions_and_targets(targets: torch.Tensor, *outputs: List[torch.Tensor]) → Tuple[torch.Tensor, torch.Tensor]

Returns predictions and target tensors to be accumulated for a given metric

Parameters
  • targets (torch.Tensor) – ground truth

  • outputs (List[torch.Tensor]) – outputs of the model

Returns

A tuple of tensors (predicted_values, target_values)

property name: str

The name of the loss to be used in configuration files and displayed on Tensorboard
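
Putting the pieces together, a minimal subclass typically overrides name, get_predictions_and_targets, and compute_metric. The sketch below shows an illustrative thresholded binary accuracy; the logic is an example, not a metric shipped with PyDGN.

    import torch
    from typing import List, Tuple
    from pydgn.training.callback.metric import Metric

    class BinaryAccuracy(Metric):
        # Illustrative metric: thresholded accuracy for binary outputs.

        @property
        def name(self) -> str:
            return "Binary Accuracy"

        def get_predictions_and_targets(
            self, targets: torch.Tensor, *outputs: List[torch.Tensor]
        ) -> Tuple[torch.Tensor, torch.Tensor]:
            # Convention from this page: outputs[0] holds the predictions
            return outputs[0].squeeze(), targets.squeeze()

        def compute_metric(
            self, targets: torch.Tensor, predictions: torch.Tensor
        ) -> torch.Tensor:
            return ((predictions > 0.5).float() == targets).float().mean()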

on_backward(state: pydgn.training.event.state.State)

Calls backward on the loss if the metric is a loss.

Parameters

state (State) – object holding training information

on_compute_metrics(state: pydgn.training.event.state.State)

Computes the loss/score depending on the metric, updating the batch_loss or batch_score field in the state. In temporal graph learning, this method is called more than once before the batch ends, so we accumulate the loss or scores across timesteps of a single batch.

Parameters

state (State) – object holding training information

on_eval_batch_end(state: pydgn.training.event.state.State)

If aggregated metric values are not computed over the entire epoch, populates the batch metrics list with the new loss/score, dividing by the number of timesteps in the batch (default is 1 for static datasets)

Parameters

state (State) – object holding training information

on_eval_batch_start(state: pydgn.training.event.state.State)

Initializes the number of potential time steps in a batch (for temporal learning)

Parameters

state (State) – object holding training information

on_eval_epoch_end(state: pydgn.training.event.state.State)

Computes the mean of batch metrics or an aggregated score over the whole epoch, depending on the accumulate_over_epoch parameter. Updates the epoch_loss and epoch_score fields in the state and resets the basic fields used.

Parameters

state (State) – object holding training information

on_eval_epoch_start(state: pydgn.training.event.state.State)

Initializes the list of batch metrics as well as the lists of batch predictions and targets for the metric

Parameters

state (State) – object holding training information

on_training_batch_end(state: pydgn.training.event.state.State)

If aggregated metric values are not computed over the entire epoch, populates the batch metrics list with the new loss/score, dividing by the number of timesteps in the batch (default is 1 for static datasets)

Parameters

state (State) – object holding training information

on_training_batch_start(state: pydgn.training.event.state.State)

Initializes the number of potential time steps in a batch (for temporal learning)

Parameters

state (State) – object holding training information

on_training_epoch_end(state: pydgn.training.event.state.State)

Computes the mean of batch metrics or an aggregated score over the whole epoch, depending on the accumulate_over_epoch parameter. Updates the epoch_loss and epoch_score fields in the state and resets the basic fields used.

Parameters

state (State) – object holding training information

on_training_epoch_start(state: pydgn.training.event.state.State)

Initializes the list of batch metrics as well as the lists of batch predictions and targets for the metric

Parameters

state (State) – object holding training information

class pydgn.training.callback.metric.MultiScore(*args: Any, **kwargs: Any)

Bases: pydgn.training.callback.metric.Metric

This class is used to keep track of multiple additional metrics used as scores, rather than losses.

Parameters
  • use_as_loss (bool) – whether this metric should act as a loss (i.e., it should act when on_backward() is called). Used by PyDGN, no need to care about this.

  • reduction (str) – the type of reduction to apply across samples of the mini-batch. Supports mean and sum. Default is mean.

  • accumulate_over_epoch (bool) – Whether or not to display the epoch-wise metric rather than an average of per-batch metrics. If true, it keeps a list of predictions and target values across the entire epoch. Use it especially with batch-sensitive metrics, such as micro AP/F1 scores. Default is True.

  • force_cpu (bool) – Whether or not to move all predictions to cpu before computing the epoch-wise loss/score. Default is True.

  • device (str) – The device used. Default is ‘cpu’.

  • main_scorer (Metric) – the score on which final results are computed.

  • extra_scorers (dict) – dictionary of other metrics to consider.

_istantiate_scorer(scorer)

Instantiates a scorer with its own arguments (if any are given)

accumulate_predictions_and_targets(targets: torch.Tensor, *outputs: List[torch.Tensor]) → None

Accumulates predictions and targets of the batch into a list for each scorer, so as to compute aggregated statistics at the end of an epoch.

Parameters
  • targets (torch.Tensor) – target tensor

  • outputs (List[torch.Tensor]) – outputs of the model

forward(targets: torch.Tensor, *outputs: List[torch.Tensor]) → Union[dict, float]

For each scorer, computes a score and returns them in a dictionary

Parameters
  • targets (torch.Tensor) – ground truth

  • outputs (List[torch.Tensor]) – outputs of the model

get_main_metric_name()

Returns the name of the first scorer that is passed to this class via the __init__ method.

property name: str

The name of the loss to be used in configuration files and displayed on Tensorboard

on_eval_batch_end(state: pydgn.training.event.state.State)

For each scorer, computes the average metric in the batch wrt the number of timesteps (default is 1 for static datasets) unless statistics are accumulated over the entire epoch

Parameters

state (State) – object holding training information

on_eval_epoch_end(state: pydgn.training.event.state.State)

For each score, computes the epoch scores using the same logic as the superclass

Parameters

state (State) – object holding training information

on_eval_epoch_start(state: pydgn.training.event.state.State)

Compared to superclass version, initializes a dictionary for each score to track rather than single lists

Parameters

state (State) – object holding training information

on_training_batch_end(state: pydgn.training.event.state.State)

For each scorer, computes the average metric in the batch wrt the number of timesteps (default is 1 for static datasets) unless statistics are accumulated over the entire epoch

Parameters

state (State) – object holding training information

on_training_epoch_end(state: pydgn.training.event.state.State)

For each score, computes the epoch scores using the same logic as the superclass

Parameters

state (State) – object holding training information

on_training_epoch_start(state: pydgn.training.event.state.State)

Compared to superclass version, initializes a dictionary for each score to track rather than single lists

Parameters

state (State) – object holding training information

class pydgn.training.callback.metric.MulticlassAccuracy(*args: Any, **kwargs: Any)

Bases: pydgn.training.callback.metric.Metric

Implements multiclass classification accuracy.

static _get_correct(output)

Returns the argmax of the output along dimension 1.

compute_metric(targets: torch.Tensor, predictions: torch.Tensor) → torch.tensor

Computes the accuracy of the discrete class predictions with respect to the targets.

Parameters
  • targets (torch.Tensor) – tensor of ground truth values

  • predictions (torch.Tensor) – tensor of predictions of the model

Returns

A tensor with the metric value

get_predictions_and_targets(targets: torch.Tensor, *outputs: List[torch.Tensor]) → Tuple[torch.Tensor, torch.Tensor]

Takes output[0] as predictions and computes a discrete class using argmax. Returns standard dataset targets as well. Squeezes the first dimension of output and targets to get single vectors.

Parameters
  • targets (torch.Tensor) – ground truth

  • outputs (List[torch.Tensor]) – outputs of the model

Returns

A tuple of tensors (predicted_values, target_values)

property name: str

The name of the loss to be used in configuration files and displayed on Tensorboard
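
The argmax logic described above amounts to the following sketch, with illustrative tensors; the scaling to a percentage is an assumption about how the accuracy is reported.

    import torch

    logits = torch.tensor([[2.0, 0.5, 0.1],
                           [0.2, 0.1, 3.0]])  # outputs[0], one row per sample
    targets = torch.tensor([0, 2])

    predicted_classes = logits.argmax(dim=1)  # _get_correct: argmax on dim 1
    accuracy = (predicted_classes == targets).float().mean() * 100
    print(accuracy)  # tensor(100.)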

class pydgn.training.callback.metric.MulticlassClassification(*args: Any, **kwargs: Any)

Bases: pydgn.training.callback.metric.Classification

Wrapper around torch.nn.CrossEntropyLoss

property name: str

The name of the loss to be used in configuration files and displayed on Tensorboard

class pydgn.training.callback.metric.Regression(*args: Any, **kwargs: Any)

Bases: pydgn.training.callback.metric.Metric

Generic metric for regression tasks. Used to maximize code reuse for classical metrics.

compute_metric(targets: torch.Tensor, predictions: torch.Tensor) → torch.tensor

Applies a regression metric (to be subclassed as it is None in this class)

Parameters
  • targets (torch.Tensor) – tensor of ground truth values

  • predictions (torch.Tensor) – tensor of predictions of the model

Returns

A tensor with the metric value

get_predictions_and_targets(targets: torch.Tensor, *outputs: List[torch.Tensor]) → Tuple[torch.Tensor, torch.Tensor]

Returns output[0] as predictions and dataset targets. Squeezes the first dimension of output and targets to get single vectors.

Parameters
  • targets (torch.Tensor) – ground truth

  • outputs (List[torch.Tensor]) – outputs of the model

Returns

A tuple of tensors (predicted_values, target_values)

property name: str

The name of the loss to be used in configuration files and displayed on Tensorboard

class pydgn.training.callback.metric.ToyMetric(*args: Any, **kwargs: Any)

Bases: pydgn.training.callback.metric.Metric

Implements a toy metric.

static _get_correct(output)

Returns the argmax of the output along dimension 1.

compute_metric(targets: torch.Tensor, predictions: torch.Tensor) → torch.tensor

Computes a dummy score

Parameters
  • targets (torch.Tensor) – tensor of ground truth values

  • predictions (torch.Tensor) – tensor of predictions of the model

Returns

A tensor with the metric value

get_predictions_and_targets(targets: torch.Tensor, *outputs: List[torch.Tensor]) → Tuple[torch.Tensor, torch.Tensor]

Returns output[0] and dataset targets

Parameters
  • targets (torch.Tensor) – ground truth

  • outputs (List[torch.Tensor]) – outputs of the model

Returns

A tuple of tensors (predicted_values, target_values)

property name: str

The name of the loss to be used in configuration files and displayed on Tensorboard

class pydgn.training.callback.metric.ToyUnsupervisedMetric(*args: Any, **kwargs: Any)

Bases: pydgn.training.callback.metric.Metric

Implements a toy metric.

compute_metric(targets: torch.Tensor, predictions: torch.Tensor) → torch.tensor

Computes a dummy score

Parameters
  • targets (torch.Tensor) – tensor of ground truth values

  • predictions (torch.Tensor) – tensor of predictions of the model

Returns

A tensor with the metric value

get_predictions_and_targets(targets: torch.Tensor, *outputs: List[torch.Tensor]) → Tuple[torch.Tensor, torch.Tensor]

Returns output[0] and dataset targets

Parameters
  • targets (torch.Tensor) – ground truth

  • outputs (List[torch.Tensor]) – outputs of the model

Returns

A tuple of tensors (predicted_values, target_values)

property name: str

The name of the loss to be used in configuration files and displayed on Tensorboard

callback.optimizer

class pydgn.training.callback.optimizer.Optimizer(model: pydgn.model.interface.ModelInterface, optimizer_class_name: str, accumulate_gradients: bool = False, **kwargs: dict)

Bases: pydgn.training.event.handler.EventHandler

Optimizer is the main event handler for optimizers. Just pass a PyTorch optimizer together with its arguments in the configuration file.

Parameters
  • model (ModelInterface) – the model that has to be trained

  • optimizer_class_name (str) – dotted path to the optimizer class to use

  • accumulate_gradients (bool) – if True, accumulate mini-batch gradients to perform a batch gradient update without loading the entire batch in memory

  • kwargs (dict) – additional parameters for the specific optimizer

load_state_dict(state_dict)

Loads the state_dict of the optimizer from a checkpoint

Parameters

state_dict (dict) – the state dictionary of the optimizer, loaded from a checkpoint

on_epoch_end(state)

Saves the state of the optimizer into the state object at the end of the epoch

Parameters

state (State) – object holding training information

on_fit_start(state)

If a checkpoint is present, loads the state of the optimizer

Parameters

state (State) – object holding training information

on_training_batch_end(state)

At the end of a batch, if batch updates are in order, performs a weight update

Parameters

state (State) – object holding training information

on_training_batch_start(state)

At the start of a batch, if batch updates are in order, zeroes the gradient of the optimizer

Parameters

state (State) – object holding training information

on_training_epoch_end(state)

At the end of an epoch, and if the gradient has been accumulated across the entire epoch, performs a weight update

Parameters

state (State) – object holding training information

on_training_epoch_start(state)

At the start of an epoch, and if the gradient is to be accumulated across the entire epoch, zeroes the gradient of the optimizer.

Parameters

state (State) – object holding training information
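
The difference between per-batch updates and epoch-wide gradient accumulation can be sketched with plain PyTorch; this mirrors the hooks above but is not PyDGN's own code.

    import torch

    model = torch.nn.Linear(4, 2)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    accumulate_gradients = True  # mirrors the constructor flag above
    batches = [torch.randn(8, 4) for _ in range(5)]

    if accumulate_gradients:
        optimizer.zero_grad()          # on_training_epoch_start
    for batch in batches:
        if not accumulate_gradients:
            optimizer.zero_grad()      # on_training_batch_start
        model(batch).sum().backward()  # dummy loss
        if not accumulate_gradients:
            optimizer.step()           # on_training_batch_end
    if accumulate_gradients:
        optimizer.step()               # on_training_epoch_end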

callback.plotter

class pydgn.training.callback.plotter.Plotter(exp_path: str, store_on_disk: bool = False, **kwargs: dict)

Bases: pydgn.training.event.handler.EventHandler

Plotter is the main event handler for plotting at training time.

Parameters
  • exp_path (str) – path where to store the Tensorboard logs

  • store_on_disk (bool) – whether to store all metrics on disk. Defaults to False

  • kwargs (dict) – additional arguments that may depend on the plotter

on_epoch_end(state: pydgn.training.event.state.State)

Writes Training, Validation and (if any) Test metrics to Tensorboard

Parameters

state (State) – object holding training information

on_fit_end(state: pydgn.training.event.state.State)

Frees resources by closing the Tensorboard writer

Parameters

state (State) – object holding training information
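
Under the hood this relies on a Tensorboard SummaryWriter; the sketch below shows the general pattern, with illustrative tag names rather than the ones PyDGN actually emits.

    from torch.utils.tensorboard import SummaryWriter

    writer = SummaryWriter(log_dir="exp_path")  # opened for the experiment
    writer.add_scalar("training/loss", 0.42, global_step=1)   # on_epoch_end
    writer.add_scalar("validation/loss", 0.48, global_step=1)
    writer.close()                              # on_fit_end frees resources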

class pydgn.training.callback.plotter.WandbPlotter(exp_path: str, wandb_project, wandb_entity, **kwargs: dict)

Bases: pydgn.training.event.handler.EventHandler

EventHandler subclass for logging to Weights & Biases

Parameters
  • exp_path (str) – path where to store the experiment logs

  • wandb_project (str) – Project Name for W&B

  • wandb_entity (str) – Entity Name for W&B

  • kwargs (dict) – additional arguments that may depend on the plotter

on_epoch_end(state: pydgn.training.event.state.State)

Writes Training, Validation and (if any) Test metrics to WandB

Parameters

state (State) – object holding training information

on_fit_end(state: pydgn.training.event.state.State)

Frees resources by closing the WandB writer

Parameters

state (State) – object holding training information

callback.scheduler

class pydgn.training.callback.scheduler.EpochScheduler(scheduler_class_name: str, optimizer: torch.optim.optimizer.Optimizer, **kwargs: dict)

Bases: pydgn.training.callback.scheduler.Scheduler

Implements a scheduler which uses the epoch count to modify the learning rate

on_training_epoch_end(state: pydgn.training.event.state.State)

Performs a scheduler’s step at the end of the training epoch.

Parameters

state (State) – object holding training information

class pydgn.training.callback.scheduler.MetricScheduler(scheduler_class_name: str, use_loss: bool, monitor: str, optimizer: torch.optim.optimizer.Optimizer, **kwargs: dict)

Bases: pydgn.training.callback.scheduler.Scheduler

Implements a scheduler which uses variations in the metric of interest to modify the learning rate

Parameters
  • scheduler_class_name (str) – dotted path to class name of the scheduler

  • use_loss (bool) – whether to monitor losses or scores

  • monitor (str) – the metric to monitor. The format is [TRAINING|VALIDATION]_[METRIC NAME], where TRAINING and VALIDATION are defined in pydgn.static

  • optimizer (torch.optim.optimizer) – the PyTorch optimizer to use. This is automatically recovered by PyDGN when providing an optimizer

  • kwargs – additional parameters for the specific scheduler to be used

on_epoch_end(state: pydgn.training.event.state.State)

Updates the state of the scheduler according to a metric to monitor at each epoch. Finally, loads the scheduler state if already present in the state_dict of a checkpoint

Parameters

state (State) – object holding training information
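
A typical metric-driven scheduler is PyTorch's ReduceLROnPlateau; the sketch below shows the underlying pattern this handler wraps, with illustrative values.

    import torch

    model = torch.nn.Linear(4, 2)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
        optimizer, mode="min", patience=10
    )

    val_loss = 0.37           # the monitored value, e.g. VALIDATION_loss
    scheduler.step(val_loss)  # what on_epoch_end effectively does each epoch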

class pydgn.training.callback.scheduler.Scheduler(scheduler_class_name: str, optimizer: torch.optim.optimizer.Optimizer, **kwargs: dict)

Bases: pydgn.training.event.handler.EventHandler

Scheduler is the main event handler for schedulers. Just pass a PyTorch scheduler together with its arguments in the configuration file.

Parameters
  • scheduler_class_name (str) – dotted path to class name of the scheduler

  • optimizer (torch.optim.optimizer) – the PyTorch optimizer to use. This is automatically recovered by PyDGN when providing an optimizer

  • kwargs – additional parameters for the specific scheduler to be used

on_epoch_end(state: pydgn.training.event.state.State)

Stores the current scheduler state into the state object for checkpointing

Parameters

state (State) – object holding training information

on_fit_start(state: pydgn.training.event.state.State)

Loads the scheduler state if already present in the state_dict of a checkpoint

Parameters

state (State) – object holding training information