CerebNet.utils.lr_scheduler¶
- class CerebNet.utils.lr_scheduler.CosineAnnealingWarmRestartsDecay(optimizer, T_0, T_mult=1, eta_min=0, last_epoch=-1)[source]¶
Learning rate scheduler that combines cosine annealing with warm restarts and additionally applies a decay factor to the value the learning rate restarts at.
Methods
- `decay_base_lr(curr_iter, n_epochs, n_iter)`: Apply the decay factor to the base learning rates that the next warm restart starts from.
- `get_last_lr()`: Return the last learning rate computed by the current scheduler.
- `get_lr()`: Compute the initial learning rate.
- `load_state_dict(state_dict)`: Load the scheduler's state.
- `print_lr(is_verbose, group, lr[, epoch])`: Display the current learning rate.
- `state_dict()`: Return the state of the scheduler as a `dict`.
- `step([epoch])`: Step the scheduler; may be called after every batch update.
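A minimal usage sketch follows. The training loop, model, and the per-iteration `decay_base_lr` call pattern are assumptions inferred from the signatures above, not code taken from CerebNet itself.

```python
import torch
from CerebNet.utils.lr_scheduler import CosineAnnealingWarmRestartsDecay

model = torch.nn.Linear(16, 2)  # stand-in model for illustration
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scheduler = CosineAnnealingWarmRestartsDecay(optimizer, T_0=10, T_mult=2, eta_min=1e-5)

n_epochs, n_iter = 50, 100  # hypothetical epoch/iteration counts
for epoch in range(n_epochs):
    for it in range(n_iter):
        optimizer.zero_grad()
        loss = model(torch.randn(8, 16)).sum()  # dummy loss
        loss.backward()
        optimizer.step()
        # Assumed call pattern: decay the base lrs the next restart starts from,
        # then advance the schedule with a fractional epoch, as with
        # torch.optim.lr_scheduler.CosineAnnealingWarmRestarts.
        scheduler.decay_base_lr(epoch * n_iter + it, n_epochs, n_iter)
        scheduler.step(epoch + it / n_iter)
```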
- class CerebNet.utils.lr_scheduler.CosineLR(base_lr, eta_min, max_epoch)[source]¶
Learning rate scheduler that follows a cosine trajectory.
Methods
- `get_epoch_lr(cur_epoch)`: Retrieve the lr for the given epoch (as specified by the lr policy).
- `lr_func_cosine(cur_epoch)`: Get the learning rate following a cosine pattern for epoch `cur_epoch`.
- `set_lr(optimizer, epoch)`: Set the optimizer lr to the specified value.
- get_epoch_lr(cur_epoch)[source]¶
Retrieve the lr for the given epoch (as specified by the lr policy).
- Parameters:
  - cur_epoch : int
    The epoch number of the current training stage.
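A sketch of how this policy might drive an optimizer is shown below. The signature names `set_lr`'s second argument `epoch` while its summary speaks of a specified value, so the convention of passing the computed lr, like the cosine formula in the comment, is an assumption to verify against the source.

```python
import torch
from CerebNet.utils.lr_scheduler import CosineLR

model = torch.nn.Linear(16, 2)  # stand-in model
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
policy = CosineLR(base_lr=0.1, eta_min=1e-5, max_epoch=100)

for epoch in range(100):
    # get_epoch_lr presumably evaluates something like
    # eta_min + 0.5 * (base_lr - eta_min) * (1 + cos(pi * epoch / max_epoch))
    lr = policy.get_epoch_lr(epoch)
    policy.set_lr(optimizer, lr)  # write the value into the optimizer (assumed usage)
    # ... train one epoch ...
```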
- class CerebNet.utils.lr_scheduler.ReduceLROnPlateauWithRestarts(optimizer, *args, T_0=10, Tmult=1, lr_restart=None, **kwargs)[source]¶
Extends the ReduceLROnPlateau scheduler with restart capability.
Attributes
- `in_cooldown`
Methods
- `get_last_lr()`: Return the last learning rate computed by the current scheduler.
- `get_lr()`: Compute the learning rate using the chainable form of the scheduler.
- `load_state_dict(state_dict)`: Load the scheduler's state.
- `print_lr(is_verbose, group, lr[, epoch])`: Display the current learning rate.
- `state_dict()`: Return the state of the scheduler as a `dict`.
- `step(metrics[, epoch])`: Perform an optimization step.
- `is_better(a, best)`: Inherited from ReduceLROnPlateau; check whether metric `a` improves on `best`.
- step(metrics, epoch=None)[source]¶
Perform an optimization step.
- Parameters:
  - metrics : float
    The value of the monitored quantity, used to decide whether to reduce or restart the learning rate.
  - epoch : int, optional
    The current epoch; this argument is deprecated in PyTorch and usually left as None.
Notes
For details, refer to the PyTorch documentation for ReduceLROnPlateau at https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.ReduceLROnPlateau.html
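The sketch below shows validation-driven stepping. Keyword arguments such as `mode`, `factor`, and `patience` are assumed to be forwarded to the parent ReduceLROnPlateau via `*args`/`**kwargs`, and the validation loss here is a stand-in.

```python
import torch
from CerebNet.utils.lr_scheduler import ReduceLROnPlateauWithRestarts

model = torch.nn.Linear(16, 2)  # stand-in model
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = ReduceLROnPlateauWithRestarts(
    optimizer,
    mode="min", factor=0.5, patience=3,  # presumably forwarded to ReduceLROnPlateau
    T_0=10, Tmult=1, lr_restart=None,
)

for epoch in range(40):
    # ... train one epoch ...
    val_loss = float(torch.rand(1))  # stand-in for a real validation metric
    scheduler.step(val_loss)  # reduce on plateau; restart according to T_0/Tmult
```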
- class CerebNet.utils.lr_scheduler.WarmupCosineLR(optimizer, max_iters, warmup_factor=0.001, warmup_iters=1000, warmup_method='linear', last_epoch=-1)[source]¶
Learning rate scheduler that combines a cosine schedule with a warmup phase.
Methods
- `get_last_lr()`: Return the last learning rate computed by the current scheduler.
- `get_lr()`: Get the learning rates at the current epoch.
- `load_state_dict(state_dict)`: Load the scheduler's state.
- `print_lr(is_verbose, group, lr[, epoch])`: Display the current learning rate.
- `state_dict()`: Return the state of the scheduler as a `dict`.
- `step([epoch])`: Perform a step.
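A per-iteration usage sketch follows; that `step()` is called once per iteration rather than per epoch is inferred from the `max_iters`/`warmup_iters` naming and is an assumption.

```python
import torch
from CerebNet.utils.lr_scheduler import WarmupCosineLR

model = torch.nn.Linear(16, 2)  # stand-in model
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
max_iters = 10_000
scheduler = WarmupCosineLR(
    optimizer,
    max_iters=max_iters,
    warmup_factor=0.001,     # start at 0.1% of the base lr
    warmup_iters=1000,       # warm up over the first 1000 iterations
    warmup_method="linear",
)

for it in range(max_iters):
    optimizer.zero_grad()
    loss = model(torch.randn(8, 16)).sum()  # dummy loss
    loss.backward()
    optimizer.step()
    scheduler.step()  # advance warmup, then cosine decay
```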