deepdow.callbacks module¶
Collection of different callbacks.
- class BenchmarkCallback(lookbacks=None)[source]¶
Bases:
Callback
Computation of benchmark performance over different metrics and dataloaders.
- Parameters:
lookbacks (list or None) – If list, then a list of integers representing the different lookbacks; the benchmarks will be run for all of them. If None, only the default lookback implied by the dataloader is used.
- run¶
Run instance that is using this callback.
- Type:
deepdow.experiments.Run
Notes
Very useful for establishing baselines for deep learning models.
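The lookbacks mechanic can be illustrated with a small sketch (plain Python, not the deepdow implementation; the benchmark and data below are made up): for each lookback, only the last lookback time steps of the input window are used when evaluating a benchmark.

```python
# Sketch: evaluating a toy benchmark over several lookbacks.
# This mimics the idea behind BenchmarkCallback(lookbacks=[...]);
# the benchmark and data below are illustrative, not deepdow code.

def equally_weighted(window):
    """Toy benchmark: mean return of the window."""
    return sum(window) / len(window)

def evaluate_over_lookbacks(returns, lookbacks):
    """Run the benchmark on the last `lookback` steps for each lookback."""
    return {lb: equally_weighted(returns[-lb:]) for lb in lookbacks}

returns = [0.01, 0.02, -0.01, 0.03]
results = evaluate_over_lookbacks(returns, lookbacks=[2, 4])
print(results)  # one baseline value per lookback
```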
- class Callback[source]¶
Bases:
object
Parent class for all callbacks.
General construct that allows for taking different actions at different points of the training process. One can provide a list of callbacks to the deepdow.experiments.Run.
Notes
To implement new callbacks one needs to subclass this class.
- on_batch_begin(metadata)[source]¶
Take actions at the beginning of a batch.
- Parameters:
metadata (dict) – Dictionary that is going to be populated with relevant data within Run.launch. Keys available are ‘asset_names’, ‘batch’, ‘epoch’, ‘timestamps’, ‘X_batch’, ‘y_batch’.
- on_batch_end(metadata)[source]¶
Take actions at the end of a batch.
- Parameters:
metadata (dict) – Dictionary that is going to be populated with relevant data within Run.launch. Keys available are ‘asset_names’, ‘batch’, ‘batch_loss’, ‘epoch’, ‘timestamps’, ‘weights’, ‘X_batch’, ‘y_batch’.
- on_epoch_begin(metadata)[source]¶
Take actions at the beginning of an epoch.
- Parameters:
metadata (dict) – Dictionary that is going to be populated with relevant data within Run.launch. Keys available are ‘epoch’.
- on_epoch_end(metadata)[source]¶
Take actions at the end of an epoch.
- Parameters:
metadata (dict) – Dictionary that is going to be populated with relevant data within Run.launch. Keys available are ‘epoch’, ‘n_epochs’.
- on_train_begin(metadata)[source]¶
Take actions at the beginning of the training.
- Parameters:
metadata (dict) – Dictionary that is going to be populated with relevant data within Run.launch.
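Since new callbacks are created by subclassing, the pattern can be sketched as follows. This is a self-contained stand-in: a minimal base class with the hook names documented above is defined inline so the example runs without deepdow; in real code you would subclass deepdow.callbacks.Callback instead.

```python
# Sketch of a custom callback. The Callback base here is a stand-in
# with the same hook names as deepdow.callbacks.Callback so the
# example is self-contained.

class Callback:  # stand-in for deepdow.callbacks.Callback
    def on_train_begin(self, metadata): pass
    def on_epoch_begin(self, metadata): pass
    def on_batch_begin(self, metadata): pass
    def on_batch_end(self, metadata): pass
    def on_epoch_end(self, metadata): pass

class EpochCounterCallback(Callback):
    """Counts completed epochs via the on_epoch_end hook."""

    def __init__(self):
        self.n_seen = 0

    def on_epoch_end(self, metadata):
        # metadata carries 'epoch' and 'n_epochs' at this hook
        self.n_seen += 1

cb = EpochCounterCallback()
for epoch in range(3):
    cb.on_epoch_end({"epoch": epoch, "n_epochs": 3})
print(cb.n_seen)  # 3
```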
- class EarlyStoppingCallback(dataloader_name, metric_name, patience=5)[source]¶
Bases:
Callback
Early stopping callback.
In the background, we keep a running minimum of a metric of interest. If it does not improve for more than patience epochs, the training is stopped.
- Parameters:
dataloader_name (str) – Name of the dataloader; needs to correspond to a key in val_dataloaders in deepdow.experiments.Run.
metric_name (str) – Name of the metric to use (the lower the better); needs to correspond to a key in metrics in deepdow.experiments.Run.
patience (int) – Number of epochs without improvement before the training is stopped.
- min¶
Running minimum of the metric.
- Type:
float
- n_epochs_no_improvement¶
Number of epochs without improvement - not going below the previous minimum.
- Type:
int
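The running-minimum-plus-patience bookkeeping described by the min and n_epochs_no_improvement attributes can be sketched as follows (a simplified stand-in, not the deepdow implementation):

```python
# Sketch of early stopping: keep a running minimum; if it is not
# beaten for more than `patience` consecutive epochs, raise an
# exception to stop training. Simplified stand-in, not deepdow code.

class EarlyStoppingException(Exception):
    """Raised to stop the training loop."""

class EarlyStopper:
    def __init__(self, patience=5):
        self.patience = patience
        self.min = float("inf")
        self.n_epochs_no_improvement = 0

    def update(self, metric_value):
        if metric_value < self.min:
            self.min = metric_value
            self.n_epochs_no_improvement = 0
        else:
            self.n_epochs_no_improvement += 1
        if self.n_epochs_no_improvement > self.patience:
            raise EarlyStoppingException("no improvement")

stopper = EarlyStopper(patience=2)
history = [0.9, 0.8, 0.85, 0.83, 0.84]  # stalls after 0.8
stopped = False
for value in history:
    try:
        stopper.update(value)
    except EarlyStoppingException:
        stopped = True
        break
print(stopped)  # True
```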
- exception EarlyStoppingException[source]¶
Bases:
Exception
Custom exception raised by EarlyStoppingCallback to stop the training.
- class MLFlowCallback(run_name=None, mlflow_path=None, experiment_name=None, run_id=None, log_benchmarks=False)[source]¶
Bases:
Callback
MLFlow logging callback.
- Parameters:
run_name (str or None) – If str, then the name of a new run to be created. If None, then the user either provides run_id of an existing run and everything will be logged into it, or a new run with a random name will be generated.
mlflow_path (str or pathlib.Path or None) – If str or pathlib.Path, then the absolute path to the folder in which mlruns lie. If None, the home folder is used.
experiment_name (str or None) – Experiment to be used. If None, the default one is used.
run_id (str or None) – If provided and run_name is None, then an existing run is continued. If None, a new run is created.
log_benchmarks (bool) – If True, then all benchmarks will be logged under separate mlflow runs.
- run¶
Run instance that is using this callback.
- Type:
deepdow.experiments.Run
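The run_name / run_id resolution described above can be sketched in plain Python (this is illustrative logic, not the deepdow or mlflow implementation): a str run_name creates a new named run, run_id alone continues an existing run, and neither yields a new run with a random name.

```python
# Sketch of run_name / run_id resolution (illustrative only):
#   run_name given        -> new run with that name
#   run_id given, no name -> continue the existing run
#   neither               -> new run with a random name

import uuid

def resolve_run(run_name=None, run_id=None):
    if run_name is not None:
        return ("new", run_name)
    if run_id is not None:
        return ("continue", run_id)
    return ("new", uuid.uuid4().hex)  # random name

print(resolve_run(run_name="experiment-a"))  # ('new', 'experiment-a')
print(resolve_run(run_id="abc123"))          # ('continue', 'abc123')
```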
- class ModelCheckpointCallback(folder_path, dataloader_name, metric_name, verbose=False)[source]¶
Bases:
Callback
Model checkpointing callback.
In the background, we keep a running minimum of a metric of interest. Whenever a new minimum is reached, the model is checkpointed.
- Parameters:
folder_path (str or pathlib.Path) – Directory to which to save the checkpoints.
dataloader_name (str) – Name of the dataloader; needs to correspond to a key in val_dataloaders in deepdow.experiments.Run.
metric_name (str) – Name of the metric to use (the lower the better); needs to correspond to a key in metrics in deepdow.experiments.Run.
verbose (bool) – If True, each checkpointing triggers a message.
- min¶
Running minimum of the metric.
- Type:
float
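The checkpoint-on-new-minimum bookkeeping can be sketched as follows (a simplified stand-in, not the deepdow implementation; no files are written, "saves" are recorded in a list instead):

```python
# Sketch of ModelCheckpointCallback bookkeeping: track a running
# minimum and "save" whenever the metric reaches a new minimum.
# Stand-in only; a real callback would serialize the network to
# folder_path instead of appending to a list.

class Checkpointer:
    def __init__(self, verbose=False):
        self.min = float("inf")
        self.saved_at = []  # epochs at which we "saved"
        self.verbose = verbose

    def on_epoch_end(self, epoch, metric_value):
        if metric_value < self.min:
            self.min = metric_value
            self.saved_at.append(epoch)
            if self.verbose:
                print(f"epoch {epoch}: new minimum {metric_value}")

ckpt = Checkpointer()
for epoch, value in enumerate([0.9, 0.7, 0.8, 0.6]):
    ckpt.on_epoch_end(epoch, value)
print(ckpt.saved_at)  # [0, 1, 3]
```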
- class ProgressBarCallback(output='stderr', n_decimals=3)[source]¶
Bases:
Callback
Progress bar reporting remaining steps and relevant metrics.
- bar¶
Bar object that is going to be instantiated at the beginning of each epoch.
- Type:
tqdm.tqdm
- metrics¶
Keys are equal to self.run.metrics.keys() and the values are lists that are appended to on batch end with metrics computed after the gradient step.
- Type:
dict
- run¶
Run object that is running the main training loop. One can get access to multiple useful things like the network (run.network), the train dataloader (run.train_dataloader), etc.
- Type:
deepdow.experiments.Run
- output¶
Where to output the progress bar.
- Type:
str, {‘stdout’, ‘stderr’}
- static create_custom_postfix_str(metrics, n_decimals=5)[source]¶
Create a custom string with metrics.
- Parameters:
metrics (dict) – Keys represent metric names and the values are the corresponding metric values.
n_decimals (int) – Number of decimals to display.
- Returns:
formatted – Nicely formatted string to be appended to the progress bar.
- Return type:
str
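The postfix construction can be sketched as follows (an illustrative re-implementation; the exact format deepdow produces may differ):

```python
# Sketch of create_custom_postfix_str: render a metrics dict as a
# compact "name=value" string rounded to n_decimals, suitable for a
# progress-bar postfix. Illustrative; not the deepdow source.

def custom_postfix(metrics, n_decimals=3):
    return ", ".join(
        f"{name}={value:.{n_decimals}f}" for name, value in metrics.items()
    )

print(custom_postfix({"loss": 0.123456, "sharpe": 1.5}))
# loss=0.123, sharpe=1.500
```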
- class TensorBoardCallback(log_dir=None, ts=None, log_benchmarks=False)[source]¶
Bases:
Callback
Tensorboard logging interface.
- Currently supports:
images (evolution of predicted weights over time)
histograms (activations of input and outputs of all layers)
scalars (logged metrics)
- Parameters:
log_dir (None or str or pathlib.Path) – Folder where the logs will be saved. If None, the current working directory is used; otherwise the exact path.
ts (datetime.datetime or None) – If datetime.datetime, then only the specific sample corresponding to the provided timestamp is logged. If None, every sample is logged.
log_benchmarks (bool) – If True, then benchmark metrics are logged to scalars. The folder is log_dir / bm_name.
- run¶
Run instance that is using this callback.
- Type:
deepdow.experiments.Run
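The ts filtering described above can be sketched in plain Python (a simplified stand-in, not the deepdow implementation): when a timestamp is given, only the matching sample is logged; otherwise all samples are.

```python
# Sketch of the `ts` parameter: select which samples get logged.
# Stand-in only; the real callback writes images/histograms/scalars
# via TensorBoard rather than returning a list.

import datetime

def samples_to_log(timestamps, ts=None):
    if ts is None:
        return list(timestamps)  # log every sample
    return [t for t in timestamps if t == ts]  # log only the match

stamps = [datetime.datetime(2020, 1, d) for d in (1, 2, 3)]
print(len(samples_to_log(stamps)))                                    # 3
print(len(samples_to_log(stamps, ts=datetime.datetime(2020, 1, 2))))  # 1
```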
- class ValidationCallback(freq=1, lookbacks=None)[source]¶
Bases:
Callback
Logging of all metrics for all validation dataloaders.
- Parameters:
freq (int) – Frequency with which to compute metrics. If equal to 1, then every epoch. The higher the value, the less frequent the logging.
lookbacks (list or None) – If list, then a list of integers representing the different lookbacks; the benchmarks will be run for all of them. If None, only the default lookback implied by the dataloader is used.
- run¶
Run instance that is using this callback.
- Type:
deepdow.experiments.Run
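The freq mechanic amounts to validating only on epochs divisible by freq, which can be sketched as (illustrative logic, not the deepdow source):

```python
# Sketch of the `freq` parameter: with freq=1 every epoch is
# validated; with freq=3 only every third epoch is. Illustrative
# stand-in for the scheduling inside ValidationCallback.

def should_validate(epoch, freq=1):
    return epoch % freq == 0

logged = [epoch for epoch in range(10) if should_validate(epoch, freq=3)]
print(logged)  # [0, 3, 6, 9]
```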