CerebNet.utils.lr_scheduler

class CerebNet.utils.lr_scheduler.CosineAnnealingWarmRestartsDecay(optimizer, T_0, T_mult=1, eta_min=0, last_epoch=-1)[source]

Learning rate scheduler that combines cosine annealing with warm restarts, and additionally decays the learning rate from which each restart begins.

Methods

decay_base_lr(curr_iter, n_epochs, n_iter)

Decay the base learning rate from which the scheduler restarts.

get_last_lr()

Return last computed learning rate by current scheduler.

load_state_dict(state_dict)

Loads the scheduler's state.

print_lr(is_verbose, group, lr[, epoch])

Display the current learning rate.

state_dict()

Returns the state of the scheduler as a dict.

step([epoch])

Step could be called after every batch update.

get_lr

decay_base_lr(curr_iter, n_epochs, n_iter)[source]

Decay the base learning rate from which the scheduler restarts.
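A minimal usage sketch (not taken from the CerebNet sources): the hyperparameter values and the placement of the decay_base_lr call are illustrative assumptions, while the per-iteration step(epoch + it / n_iter) call follows the usual torch.optim CosineAnnealingWarmRestarts pattern.

>>> import torch
>>> from CerebNet.utils.lr_scheduler import CosineAnnealingWarmRestartsDecay
>>> model = torch.nn.Linear(4, 2)
>>> optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
>>> scheduler = CosineAnnealingWarmRestartsDecay(optimizer, T_0=10, T_mult=2, eta_min=1e-6)
>>> n_epochs, n_iter = 30, 100
>>> for epoch in range(n_epochs):
...     for it in range(n_iter):
...         optimizer.step()                      # one training iteration (forward/backward omitted)
...         scheduler.step(epoch + it / n_iter)   # cosine annealing with warm restarts
...     # illustrative hook: lower the base lr that the next restart returns to
...     scheduler.decay_base_lr(curr_iter=(epoch + 1) * n_iter, n_epochs=n_epochs, n_iter=n_iter)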

class CerebNet.utils.lr_scheduler.CosineLR(base_lr, eta_min, max_epoch)[source]

Learning rate scheduler that follows a Cosine trajectory.

Methods

get_epoch_lr(cur_epoch)

Retrieves the lr for the given epoch (as specified by the lr policy).

lr_func_cosine(cur_epoch)

Get the learning rate following a cosine pattern for the epoch cur_epoch.

set_lr(optimizer, epoch)

Sets the optimizer lr to the value for the given epoch.

get_epoch_lr(cur_epoch)[source]

Retrieves the lr for the given epoch (as specified by the lr policy).

Parameters:
cur_epoch : int

The epoch number within the current training stage.

lr_func_cosine(cur_epoch)[source]

Get the learning rate following a cosine pattern for the epoch cur_epoch.

Parameters:
cur_epoch : int

The epoch number within the current training stage.
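The exact expression is not reproduced on this page; a conventional cosine schedule from base_lr down to eta_min over max_epoch epochs (an assumption about what this method implements, shown only for orientation) looks like:

>>> import math
>>> def cosine_lr(base_lr, eta_min, cur_epoch, max_epoch):
...     # conventional cosine annealing; illustrative, not the CerebNet source
...     return eta_min + 0.5 * (base_lr - eta_min) * (1 + math.cos(math.pi * cur_epoch / max_epoch))
>>> cosine_lr(0.1, 0.0, 0, 100)      # start of training: full base_lr
0.1
>>> cosine_lr(0.1, 0.0, 100, 100)    # end of training: eta_min
0.0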

set_lr(optimizer, epoch)[source]

Sets the optimizer lr to the value for the given epoch.

Parameters:
optimizer : torch.optim.Optimizer

The optimizer used to optimize the current network.

epoch : int

The epoch for which to update the learning rate.
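A minimal per-epoch sketch (not from the CerebNet sources; the optimizer setup and hyperparameter values are illustrative, the method calls follow the signatures documented above):

>>> import torch
>>> from CerebNet.utils.lr_scheduler import CosineLR
>>> model = torch.nn.Linear(4, 2)
>>> optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
>>> policy = CosineLR(base_lr=0.1, eta_min=1e-6, max_epoch=100)
>>> for epoch in range(100):
...     policy.set_lr(optimizer, epoch)     # write the cosine lr for this epoch into the optimizer
...     lr = policy.get_epoch_lr(epoch)     # the same value, e.g. for logging
...     optimizer.step()                    # training step(s) for this epoch omitted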

class CerebNet.utils.lr_scheduler.ReduceLROnPlateauWithRestarts(optimizer, *args, T_0=10, Tmult=1, lr_restart=None, **kwargs)[source]

Extends the ReduceLROnPlateau class with the ability to restart the learning rate.

Attributes

in_cooldown

Methods

get_last_lr()

Return last computed learning rate by current scheduler.

load_state_dict(state_dict)

Loads the scheduler's state.

print_lr(is_verbose, group, lr[, epoch])

Display the current learning rate.

state_dict()

Returns the state of the scheduler as a dict.

step(metrics[, epoch])

Performs a scheduler step based on the given metric.

get_lr

is_better

step(metrics, epoch=None)[source]

Performs a scheduler step based on the given metric.

Parameters:
metrics : float

The metric value used to determine learning rate adjustments.

epoch : int, default=None

The current epoch number.

Notes

For details, refer to the PyTorch documentation for ReduceLROnPlateau at: https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.ReduceLROnPlateau.html
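A minimal sketch (not from the CerebNet sources; the validation loss is a placeholder, and the constructor keyword arguments other than T_0 are the usual ReduceLROnPlateau options passed through):

>>> import torch
>>> from CerebNet.utils.lr_scheduler import ReduceLROnPlateauWithRestarts
>>> model = torch.nn.Linear(4, 2)
>>> optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
>>> scheduler = ReduceLROnPlateauWithRestarts(optimizer, mode="min", patience=5, T_0=10)
>>> for epoch in range(50):
...     val_loss = 0.5                     # placeholder for a real validation metric
...     scheduler.step(val_loss, epoch)    # reduce lr on plateau; restarts are handled internally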

class CerebNet.utils.lr_scheduler.WarmupCosineLR(optimizer, max_iters, warmup_factor=0.001, warmup_iters=1000, warmup_method='linear', last_epoch=-1)[source]

Learning rate scheduler that combines a cosine schedule with a warmup phase.

Methods

get_last_lr()

Return last computed learning rate by current scheduler.

get_lr()

Get the learning rates at the current epoch.

load_state_dict(state_dict)

Loads the scheduler's state.

print_lr(is_verbose, group, lr[, epoch])

Display the current learning rate.

state_dict()

Returns the state of the scheduler as a dict.

step

get_lr()[source]

Get the learning rates at the current epoch.
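A minimal sketch (not from the CerebNet sources; the iteration counts and hyperparameters are illustrative, and max_iters / warmup_iters count optimizer updates):

>>> import torch
>>> from CerebNet.utils.lr_scheduler import WarmupCosineLR
>>> model = torch.nn.Linear(4, 2)
>>> optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
>>> scheduler = WarmupCosineLR(optimizer, max_iters=10000, warmup_iters=1000,
...                            warmup_factor=0.001, warmup_method="linear")
>>> for it in range(10000):
...     optimizer.step()      # one training iteration (forward/backward omitted)
...     scheduler.step()      # linear warmup for the first 1000 iterations, cosine decay afterwards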

CerebNet.utils.lr_scheduler.get_lr_scheduler(optimizer, cfg)[source]

Build a learning rate scheduler object from the config data in cfg.
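A minimal sketch; the concrete configuration keys that choose between the schedulers above live in the CerebNet config system and are not documented on this page, so cfg is assumed to be an already-loaded config node:

>>> import torch
>>> from CerebNet.utils.lr_scheduler import get_lr_scheduler
>>> model = torch.nn.Linear(4, 2)
>>> optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
>>> scheduler = get_lr_scheduler(optimizer, cfg)   # cfg: config selecting one of the schedulers above
>>> # the returned object is then stepped in the training loop like any other lr scheduler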